Tax Ethics and Tax Evasion: Evidence from Greece
Tax evasion involves the deliberate act of noncompliance with tax legislation and the avoidance of tax payments by citizens. Moreover, many citizens do not directly violate tax legislation, but avoid paying taxes by taking advantage of the lack of explicit legislation. Along those lines, tax ethics forms a personal constraint which regulates the way citizens behave regarding the payment of taxes. The purpose of this paper is to explore the tax ethics of Greek citizens and to compare them with those of other countries. In order to compare our results with those of other countries, we used a questionnaire developed by Prof. Robert W. McGee, which has been used as a research instrument in similar studies. Our results show that Greek citizens do not evade taxes for potential personal gain but rather because they believe that the state is incapable of allocating public money properly and that the current political and tax system is inefficient or corrupt. In conclusion, the state has to gain the trust of its citizens in order to strengthen tax ethics and tackle tax evasion, by allocating its resources efficiently and by promoting political transparency.
Introduction
Withholding information regarding one's income in order to pay less tax has moral and legal implications [1]. The state needs sources of income in order to be able to offer services and provide security to its citizens. On the other hand, the government frequently wastes public money due to inefficient management and sometimes due to illegal abuse. Consequently, citizens deviate from the legal procedure regarding the payment of taxes, while the government sometimes crosses the limits of good public administration. Since 2008, in the light of the economic crisis, stricter measures have been taken regarding the detection of "concealed" income [2]. In this scope, governing practices regarding tax revenue collection have been developed in the European Union (EU). Moreover, the exchange of information regarding "concealed" income is also promoted between member states and countries such as Switzerland, Monaco, San Marino and Liechtenstein [3]. In recent years, the European Parliament has contributed greatly to the implementation of initiatives aimed at mitigating tax evasion and the laundering of money from illegal activities [4]. As a result, governments in the EU have enacted and enforced more sophisticated legislation against tax evasion. Greece has a very strict legal system. However, the current "overregulation" often results in contradictions between laws and in legal complexity, which encourages tax evasion. At the same time, there is a significant decrease in VAT revenue in Greece compared to the rest of the European Union. The aim of this paper is to explore the issue of tax ethics and tax evasion in order to identify the situations in which tax evasion is accepted as ethical and the social groups that consider tax evasion to be more acceptable. Moreover, we will analyze the tax ethics of Greek citizens and compare it to that of citizens of other countries. Regarding the contribution of our paper to academic research, we use Robert McGee's questionnaire to measure the tax ethics of Greek citizens and their attitudes toward tax evasion. To our knowledge, no other research has used an internationally recognized research instrument to compare Greece with other countries and to identify the characteristics of citizens that lead them to justify tax evasion.
The paper is structured as follows: first, the concepts of tax ethics and tax evasion are presented and a literature review is conducted. Next, we introduce the research methodology and present the research questions as well as the measurement instrument. Then the results of the empirical survey are presented. The paper concludes with the main conclusions, limitations and proposals for future research.
Conceptual Framework
Tax ethics refers to the taxpayer's moral obligation to pay taxes and is affected by the relationship between the taxpayer, as a citizen, and the government. Tax ethics and tax evasion are often used interchangeably. Tax evasion, however, refers to a person actively trying to avoid paying tax [5], while tax ethics refers to all non-monetary incentives to comply with tax legislation [6]. Measuring tax ethics helps to indirectly measure the size of the "shadow economy", since not many individuals are willing to reveal whether they have committed financial irregularities or not. Consequently, a higher degree of tax ethics is positively related to regular tax payments and compliance with tax regulations. On the contrary, individuals with weaker tax ethics are more likely to evade taxes.
Literature Review
The attitude of citizens towards tax obligations is affected by various factors [6]. For example, several studies show that tax ethics is higher among older citizens, citizens who have strong religious beliefs or citizens who trust politicians. Moreover, international research has shown that tax ethics is positively related to tax system fairness and government transparency and negatively related to tax evasion and the shadow economy [5]. In Portugal, a survey conducted by the European Values Study during the period 2008 to 2010 showed that behavioral, psychological and political factors have a direct impact on tax ethics. Fairness in the democratic political system and citizens' personal satisfaction with the system have a direct influence on tax ethics. Likewise, citizens who believe that their political system is good have stronger tax ethics. On the other hand, there is a negative relationship between tax ethics, political participation and mistrust, i.e. citizens who are actively involved in politics and who do not trust others have weaker tax ethics [8]. In Turkey, trust in the state, religion, and national pride are positively related to compliance with social requirements such as tax payment. In other words, Turkish citizens who have confidence in social norms have a high level of tax ethics compared to those who do not [9]. In Spain, unlike Turkey, trust in the political system and religion do not have an effect on tax ethics. Nevertheless, national pride and a higher level of education have a positive impact on tax ethics [9]. In Russia, the World Values Survey research findings for the years 1999, 2006 and 2011 were analyzed over time, as significant changes had been made to the tax system and the way the state operates. The analysis showed a decrease in the level of tax ethics in recent years, which can be explained by the tax changes made in favor of high-income citizens. These changes have probably affected the tax ethics of low-income citizens and their desire to be consistent with their tax obligations. In Pakistan, survey results show that women have significantly higher tax ethics than men. Similarly, pensioners comply more with tax legislation than non-pensioners, while self-employed citizens have weaker tax ethics [10]. In Mauritania, similar to other surveys, married citizens and the elderly comply more with tax legislation compared to younger citizens. Moreover, justice, confidence in the government, and perceived fairness of how public money is spent improve the willingness of taxpayers to comply with tax requirements. Along those lines, tax compliance increases when citizens believe that the tax and legal systems are fair, clear and stable, and corruption is being effectively tackled [11].
In Latvia, legitimacy of the tax authorities and the government, national pride and perception of the danger and severity of punishment were found to be positively related to high tax ethics among business owners and managers [12]. Finally, in a recent survey in which data were collected from 35 Eurasian countries, tax ethics is higher in rich countries, in countries which have increased public revenues and expenditures, in countries with a fair and clear legal system and in countries with a low level of social inequality and corruption. Moreover, citizens with higher education and citizens with children have a higher sense of tax compliance [13].
Research Methodology
The purpose of this paper is to examine the perceptions of Greek taxpayers regarding tax evasion and to explore the circumstances under which tax evasion is considered acceptable. Moreover, we will compare the results of our survey with those of other countries and highlight the differences between the tax ethics of Greek citizens and those of citizens of other countries. A quantitative methodology was selected for this study. The participants were contacted via social media and email and were requested to answer an online questionnaire. The questionnaire used in this paper was developed by Professor McGee and has been used as a research tool in various studies ([1] [14]-[22]). The questionnaire is still used in academic research [23]. In the first part of the questionnaire, participants were asked to respond to questions regarding demographic characteristics such as gender, occupational status, level of education, and marital and financial status. Then, they were asked to express their level of agreement or disagreement with statements regarding tax evasion on a seven-point Likert-type scale, ranging from 1 (absolutely agree) to 7 (absolutely disagree).
Demographic Characteristics
A total of 500 questionnaires were sent via email and social media and 305 complete questionnaires were received, representing a response rate of 61.00%. To increase the response rate, three reminders were sent to each target respondent: the first one week after the initial email posting, the second after two weeks and the third after four weeks. Regarding financial situation, the majority reported an annual income between €10,000 and €30,000 per year (57.7%), followed by those who have a low income of up to €10,000 per year (29.2%), while about 10% reported an annual income of more than €30,000 per year. Finally, regarding marital status, 59.3% were married and 36.7% were unmarried.
Results of Descriptive Statistics
The mean values of the survey items are presented below (Table 2). The items examining the relationship between tax ethics and whether the respondent approves or disapproves of a potential publicly funded project received similar ratings on average. This means that Greek citizens do not link tax ethics with their own personal beliefs regarding the use of public money, as long as it is not wasted or abused. This is also supported by the mean values of items V08 and V09, which examine the relationship between tax ethics and the respondent's personal gains from the implementation of public projects. Both items received high mean values, which further strengthens our claim that Greek citizens do not evade taxes for potential personal gain but rather because they believe that the state is incapable of proper allocation of public money.
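As a rough illustration of how the descriptive statistics reported in this section could be computed, the sketch below assumes a hypothetical CSV export of the 305 responses with one column per questionnaire item (V01, V02, ...) scored 1-7 and a column for marital status; the file name and column names are illustrative assumptions, not part of the study.

```python
import pandas as pd

# Hypothetical file and column names; the original survey data are not published.
responses = pd.read_csv("tax_ethics_survey.csv")
item_cols = [c for c in responses.columns if c.startswith("V")]

# Mean score per item (higher = stronger disagreement with justifications of tax evasion).
item_means = responses[item_cols].mean().round(2)
print(item_means.sort_values())

# Response rate, assuming 500 questionnaires were sent out.
print(f"n = {len(responses)}, response rate = {len(responses) / 500:.1%}")

# Overall tax-ethics score per respondent, broken down by a demographic variable.
responses["overall"] = responses[item_cols].mean(axis=1)
print(responses.groupby("marital_status")["overall"].agg(["mean", "count"]))
```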
Finally, regarding the role of tax collection agencies, we observe that item V12, which corresponds to the relationship between tax evasion and tax audits, also has a very high mean value. This finding is particularly interesting because it shows that tax ethics is not necessarily affected by the severity of tax controls, since most respondents believe that tax evasion is unethical despite the low chance of "being caught".
Comparison with Other Countries
This research was based on a questionnaire developed by Prof. Robert W. McGee, which has been used as a research instrument in similar studies. Table 3 presents the results of similar surveys that have been conducted in 8 different countries. Compared to other countries, Greek citizens are less tolerant of tax evasion, with an average of 5.16, while the average of the rest of the countries is 5.04. The country with the weakest tax ethics is Romania, with an average of 4.59, while the country with the highest tax ethics appears to be Hong Kong, although data regarding the questions related to state discrimination are not provided. Taking into account all the items of the questionnaire, Guatemala is the country with the highest tax ethics rate. The question that received the lowest ranking in all countries is the one related to corrupt politicians and the abuse of public money. On the other hand, items V02 and V05 present the highest mean values in all countries, which means that tax ethics is positively affected by justice and efficiency in the tax system and the proper allocation of public money. Our results are in accordance with those of similar studies. In an empirical study by [24] regarding tax evasion, taxpayers' perceptions about the integrity of government officials, accountability and transparency have a significant impact on tax ethics. The same results are observed by [25], who also state that taxpayers are more likely to be consistent with their tax obligations if they believe that the government manages public money in an efficient and transparent manner. Regarding our research results, citizens are willing to pay more taxes if they believe that the government is using public money to enhance social welfare by financing projects that are beneficial not necessarily to the respondents themselves but at least to the rest of the citizens. If this relationship is not clear, taxpayers are less likely to be consistent with their tax obligations, since they believe that their contribution to general prosperity does not benefit either themselves or their fellow citizens, but is wasted through mismanagement and corruption [26].
Conclusions
Tax evasion is an economic and social phenomenon that has always been at the centre of public policy. Taxpayers commit tax evasion when they believe that public money is not allocated efficiently by the government to the citizens. Consequently, transparency is of utmost importance to taxpayers in order for them to believe that their contribution to public income is important and necessary. Therefore, it is very important for the state to have a legal framework that holds politicians accountable for their decisions and actions regarding the management of public money. Rebuilding the trust between the government and the citizens is perhaps the only solution to tackling tax evasion, and this is where the current political system should focus its efforts.
Comparing the results of our study with those of similar studies in other countries, we observe that there are no significant differences in the way tax ethics is perceived by citizens. Perhaps the only noticeable difference between Greece and the majority of the other countries analyzed in the present study is the fact that tax ethics receives higher mean values in all the questions regarding discrimination, which means that Greek citizens do not associate tax obligations with human rights and how the government treats citizens from different countries. The current research instrument has been used to measure tax ethics in Greece for the first time. We propose that further research should be conducted in order to make generalizations about this study's subject. The main constraint and limitation of the current research is the bias of the survey respondents. Future research should focus on a qualitative approach to understand the underlying factors that lead citizens to tax evasion. For example, in-depth interviews of different population groups could contribute to a further understanding of how tax evasion manifests itself in different social contexts. The collection of questionnaires began on the 10th of September 2017 and was completed on the 20th of October 2017. The descriptive statistics of the final sample are presented in Table 1.
Table 1. Demographics of respondents.
Table 2. Research items and mean values.
Table 3. Comparison of the answers of Greeks of different ages and of citizens of 8 countries.
3,455.2
2018-03-19T00:00:00.000
[ "Political Science", "Economics" ]
Unveiling the genetic architecture and transmission dynamics of a novel multidrug-resistant plasmid harboring blaNDM-5 in E. coli ST167: implications for antibiotic resistance management
Background
The emergence of multidrug-resistant (MDR) Escherichia coli strains poses significant challenges in clinical settings, particularly when these strains harbor the New Delhi metallo-β-lactamase (NDM) gene, which confers resistance to carbapenems, a critical class of last-resort antibiotics. This study investigates the genetic characteristics and implications of a novel blaNDM-5-carrying plasmid, pNDM-5-0083, isolated from E. coli strain GZ04-0083 from a clinical specimen in Zhongshan, China.
Results
Phenotypic and genotypic evaluations confirmed that the E. coli ST167 strain GZ04-0083 is a multidrug-resistant organism, showing resistance to diverse classes of antibiotics including β-lactams, carbapenems, fluoroquinolones, aminoglycosides, and sulfonamides, while maintaining susceptibility to monobactams. Investigations involving S1 pulsed-field gel electrophoresis, Southern blot analysis, and conjugation experiments, alongside genomic sequencing, confirmed the presence of the blaNDM-5 gene within a 146-kb IncFIB plasmid, pNDM-5-0083. This evidence underscores a significant risk for the horizontal transfer of resistance genes among bacterial populations. Detailed annotations of genetic elements (such as resistance genes, transposons, and insertion sequences) and comparative BLAST analyses with other blaNDM-5-carrying plasmids revealed a unique architectural configuration in pNDM-5-0083. The MDR region of this plasmid shares a conserved gene arrangement (repA-IS15DIV-blaNDM-5-bleMBL-IS91-sul2-aadA2-dfrA12) with three previously reported plasmids, indicating a potential for dynamic genetic recombination and evolution within the MDR region. Additionally, the integration of virulence factors, including the iro and sit gene clusters and enolase, into its genetic architecture poses further therapeutic challenges by enhancing the strain's pathogenicity through improved host tissue colonization, immune evasion, and increased infection severity.
Conclusions
The detailed identification and characterization of pNDM-5-0083 enhance our understanding of the mechanisms facilitating the spread of carbapenem resistance. This study illuminates the intricate interplay among various genetic elements within the novel blaNDM-5-carrying plasmid, which are crucial for the stability and mobility of resistance genes across bacterial populations. These insights highlight the urgent need for ongoing surveillance and the development of effective strategies to curb the proliferation of antibiotic resistance.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12866-024-03333-1.
Background
Multidrug-resistant (MDR) bacterial pathogens have been increasing worldwide and are now considered a significant public health threat. Several recent studies have documented the emergence of MDR Enterobacteriaceae from various sources [1][2][3][4][5][6][7], underscoring the need for proper antibiotic use. Besides, routine antimicrobial susceptibility testing is crucial to determine the most effective antibiotic treatments and to monitor the spread of emerging MDR strains. According to recent surveillance, the World Health Organization (WHO) has ranked carbapenemase-producing Enterobacteriaceae (CPE) as the third MDR bacteria in priority level one [8,9], due to their ability to resist carbapenems, which are considered the last line of defense against MDR infections [4,10]. These pathogens, including prominent species like Klebsiella pneumoniae and Escherichia coli (E. coli), have shown a disturbing increase in prevalence across various regions worldwide [11,12]. This surge is largely driven by the acquisition and dissemination of carbapenemase genes such as blaKPC, blaNDM, and blaOXA [13,14], which confer high-level resistance to nearly all β-lactams, including carbapenems. To date, 24 variants of the New Delhi metallo-β-lactamase (NDM) gene have been identified in bacteria harboring blaNDM across the globe [13,[15][16][17]], of which NDM-5 is one of the most common variants encountered among Enterobacteriaceae, with highly restricted availability of effective antibiotics [18]. Although reports of CPE carrying blaNDM-5 from various regions have attracted widespread attention [18][19][20], its genetic background is highly conserved, and the gene is often located on mobile genetic elements, together with virulence factors, that facilitate its spread [21]. Mobile elements such as insertion sequences and transposons are pivotal in the homologous recombination processes that boost the adaptability and survival of CPE. These genetic elements facilitate the acquisition and dissemination of antimicrobial resistance genes and contribute to the structural rearrangement of plasmids, impacting the fitness and virulence of bacterial pathogens [22,23]. For example, insertion sequences like IS26 can initiate the formation of composite transposons that harbor multiple resistance genes, complicating the treatment of infections caused by these bacteria [24,25]. Additionally, the incorporation of toxin-antitoxin systems and virulence factors, such as the pemK/pemI, iro, and sit gene clusters, into these plasmids enhances the pathogenic potential of bacteria. These elements help bacteria more effectively colonize host tissues, evade immune responses, and increase the severity of infections [26][27][28][29]. Moreover, both resistance and virulence genes play crucial roles in the formation of biofilms, which significantly contribute to bacterial resistance to antibiotics through multiple mechanisms, including limited antibiotic penetration, nutrient limitation, slow growth, and adaptive stress responses [30,31]. Therefore, studying the mobile genetic elements and virulence factors within plasmids provides crucial insights that can help prevent and treat infections associated with clinical multidrug-resistant strains harboring blaNDM-5. This study aims to thoroughly characterize an MDR plasmid harboring the blaNDM-5 gene in an E.
coli isolate, emphasizing the elucidation of its genetic architecture and the mechanisms underlying the spread of this resistance factor. Specifically, the research focuses on analyzing the origins and consequences of homologous recombination within the MDR region of the plasmid, as well as the roles played by mobile genetic elements and virulence factors in the dissemination of resistance. By gaining a deeper understanding of these mechanisms, this study seeks to contribute substantially to the development of more effective strategies for managing and controlling the spread of CPE infections, thereby addressing an urgent need in the field of antibiotic resistance management.
Isolation and identification of strain GZ04-0083
Strain GZ04-0083 was found in a stool specimen collected from a 37-year-old male patient with renal insufficiency who had been undergoing peritoneal dialysis for over five years. The patient was admitted to the intensive care unit (ICU) of Zhongshan City People's Hospital in Zhongshan, Guangdong Province of China due to dialysis-related peritonitis, fever, and severe diarrhea. Fecal swabs were inoculated onto blood agar plates (KMJ, Shanghai, China) and incubated at 37 °C overnight to promote growth. Using a sterile loop, a single colony was streaked onto MacConkey agar (CRmicrobio, Jiangmen, China) and incubated at 37 °C for 18-24 h to isolate gram-negative bacteria, as described by Kelly, M. T. et al. [32]. Suspected E. coli colonies were adjusted to a 0.5 McFarland standard for consistency and subsequently confirmed through biochemical assays using the Vitek-2 compact system (bioMérieux, Marcy-l'Étoile, France). This system evaluates various enzymatic activities characteristic of E. coli, following both the manufacturer's guidelines and the established clinical procedures detailed by Perez-Vazquez, M. [33]. Ethical approval for this study was obtained from the Clinical Research and Laboratory Animal Ethics Committee of Zhongshan People's Hospital (approval #K2022-008).
Antimicrobial susceptibility test
Antimicrobial susceptibility was assessed using the Vitek-2 compact system and minimum inhibitory concentration (MIC) test strips. E. coli ATCC 25922 served as the control strain, with result analysis adhering to Clinical and Laboratory Standards Institute (CLSI) guidelines [34]. Briefly, colonies confirmed as E. coli, along with the control strain, were standardized to a 0.5 McFarland turbidity, and 145 µL of this bacterial suspension was further diluted in 3 ml of 0.45% NaCl. The diluted sample was then processed using the AST-N334 card in the Vitek-2 system, according to the manufacturer's instructions. MIC values for various antibiotic classes, including β-lactams, carbapenems, fluoroquinolones, and aminoglycosides as listed in Table 1, were determined using MTS. Briefly, the bacterial suspension was evenly spread on a Mueller-Hinton agar plate (Detgerm, Guangzhou, China), and MIC test strips (MTS, Liofilchem, Italy) were carefully placed on the agar surface. After incubating at 37 °C for 16-20 h, MIC values were read where the edge of the inhibition zone met the scale on the test strips. Results were categorized as susceptible (S), intermediate (I), or resistant (R). The isolates were classified as MDR, characterized by resistance to at least one agent in three or more antibiotic classes, as defined by Magiorakos et al.
[35]. The Multiple Antibiotic Resistance (MAR) index was calculated by dividing the number of antibiotics to which the organism shows resistance by the total number of antibiotics tested, in accordance with the description of Pauls et al. [36] and CLSI guidelines.
S1 pulsed-field gel electrophoresis, Southern blot, and conjugation experiment
S1 endonuclease (Takara, Dalian, China) was used to analyze the bacterial genomic DNA of strain GZ04-0083. Pulsed-field gel electrophoresis (PFGE) was performed in a CHEF-DRIII system (Bio-Rad, Hercules, USA) to isolate DNA fragments and analyze the band patterns to obtain PFGE profiles at 6 V/cm, with an initial pulse time of 0.22 s and a final pulse time of 26.29 s, for 15 h. The separated plasmid DNA was transferred to a 0.45 µm positively charged nylon membrane (Solabio, Beijing, China) and hybridized with a digoxigenin-labeled DNA probe specific to the blaNDM-5 gene. The Southern blot experiment was performed according to the manufacturer's manual of the DIG High Prime DNA Labeling and Detection Starter Kit I (Roche, Indianapolis, USA). A conjugation experiment was conducted to assess the transferability of the resistance plasmids. The experiment was conducted with GZ04-0083 as the donor and the sodium-azide-resistant E. coli J53 as the recipient. Firstly, GZ04-0083 was mixed with J53 at a ratio of 4:1 using a filter (0.22-µm pore size) mating assay, and the filter was incubated on BHI agar (KMJ, Shanghai, China) at 37 °C overnight. Secondly, the transconjugants were selected on BHI agar plates supplemented with 4 mg/ml meropenem and 200 mg/ml sodium azide after 72 h of incubation, and the conjugation frequency was calculated as the ratio of transconjugants to recipient cells. Lastly, the plasmids in the transconjugants were also confirmed by S1-PFGE.
Plasmid sequencing and bioinformatics analysis
The whole genomic DNA of GZ04-0083 was extracted from cultured bacteria using the High Pure PCR Template Preparation Kit (Roche, Basel, Switzerland), following the manufacturer's instructions. Second-generation and nanopore sequencing were performed on the Illumina MiSeq sequencing platform and the Oxford Nanopore MinION sequencer, respectively. The genomic sequences of the plasmids were annotated using the RAST Server and Protein BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi) [39,40], and the plasmids were searched against the ISFinder database (https://www-is.biotoul.fr/) [41] to identify transposons, insertion sequences, and other structures in the genetic environment. PlasmidFinder was used to identify the replication types of the plasmids (https://cge.cbs.dtu.dk/services/PlasmidFinder) [42]. Virulence-related genes in the bacterial and plasmid genomes were identified with VFDB BLAST (http://www.mgc.ac.cn/VFs) [43]. Inkscape 1.0 was used to draw the overall structure of the plasmids, the fine-structure comparison of the multi-resistance regions, and the linear structural comparison of the multi-resistance regions with closely related plasmids. Sequence comparison and map generation were performed using BLAST (http://blast.ncbi.nlm.nih.gov) and Easyfig (version 2.1), respectively [44].
Phenotypic multidrug resistance of E. coli strain GZ04-0083
To present the phenotypic characteristics and antimicrobial resistance profile of the E. coli strain GZ04-0083 recovered from a clinical sample, an antimicrobial susceptibility test was performed using both the AST-N334 susceptibility card read by Vitek-2 and MIC test strips.
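As a minimal numeric sketch of the two ratios defined in the methods above, the MAR index and the conjugation frequency, the snippet below uses placeholder counts; the actual colony counts behind the reported values are not given in the text.

```python
def mar_index(n_resistant: int, n_tested: int) -> float:
    """MAR index: number of antibiotics the isolate resists / number tested."""
    return n_resistant / n_tested

def conjugation_frequency(n_transconjugants: float, n_recipients: float) -> float:
    """Conjugation frequency: transconjugant CFU per recipient CFU."""
    return n_transconjugants / n_recipients

# Resistance to 17 of the 19 antibiotics tested would reproduce the reported MAR index of ~0.89.
print(round(mar_index(17, 19), 2))        # 0.89

# Placeholder CFU counts chosen only to land near the reported ~3.4e-5 per recipient.
print(conjugation_frequency(3.4e3, 1e8))  # 3.4e-05
```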
In the analysis of 19 antibiotics, the E. coli strain GZ04-0083 exhibited resistance to a variety of commonly used antibiotics across different classes. It was resistant to β-lactams, including ampicillin and ceftriaxone; carbapenems, such as ertapenem and imipenem; fluoroquinolones, including ciprofloxacin and levofloxacin; and aminoglycosides like amikacin and gentamicin. In contrast, the strain remained susceptible to the monobactam antibiotic aztreonam and exhibited intermediate resistance to nitrofurantoin. The detailed resistance profile is presented in Table 1. According to the criteria established by Magiorakos et al., strain GZ04-0083 is classified as an MDR strain, with a MAR index of 0.89.
Identification of a blaNDM-5-carrying plasmid in the GZ04-0083 isolate
The antimicrobial susceptibility testing indicated that the GZ04-0083 isolate might resist carbapenems by producing a β-lactamase. To determine the genetic basis of carbapenem resistance in the GZ04-0083 isolate and assess its potential for horizontal gene transfer among Enterobacteriaceae, S1-PFGE analysis was performed; it revealed the presence of three plasmids (about 140, 80, and 30 kb, respectively) in GZ04-0083, and the blaNDM-5 gene was located on the 140-kb plasmid by Southern blot (Fig. 1 and Supplementary Figs. 1 and 2). A further conjugation experiment suggested that the blaNDM-5-carrying plasmid was able to transfer from the donor strain GZ04-0083 to the recipient E. coli J53 strain with meropenem and sodium azide as co-selection markers, and the conjugation frequency was (3.38 ± 0.82) × 10⁻⁵ per recipient. Accordingly, the GZ04-0083 isolate demonstrated multidrug resistance characteristics, evidenced by its resistance to a broad spectrum of antibiotics, including carbapenems, and the presence of the blaNDM-5 gene on a 140-kb plasmid capable of conjugative transfer to other Enterobacteriaceae.
Genome sequencing results confirming the presence of the NDM resistance gene
Whole-genome sequencing was conducted to validate the plasmid data and ascertain the molecular type of the E. coli GZ04-0083 strain. By assembling both short Illumina reads and long PromethION reads with Unicycler (v0.4.8), we established that the GZ04-0083 strain's genome consists of a single chromosome (4.91 Mb) and three plasmids (146, 89, and 27 kb). MLST analysis classified the GZ04-0083 isolate as ST167, the predominant clone among NDM-producing E. coli strains in China. A BLAST alignment of blaNDM-1 and blaNDM-5 with the GZ04-0083 nucleotide sequence confirmed the presence of the blaNDM-5 gene within the 146-kb plasmid, aligning with the Southern blot findings presented in Fig. 1. A further plasmid MLST analysis identified the 146-kb blaNDM-5-carrying plasmid as an IncFIB-type plasmid with a linear topology, which was designated pNDM-5-0083.
Genotypic multidrug resistance of E. coli strain GZ04-0083
To explore the relationship between phenotypic resistance and genotypic features in the GZ04-0083 isolate, our analysis extended beyond the identification of the blaNDM-5-carrying plasmid.
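The hybrid assembly step mentioned above (Unicycler combining Illumina short reads with Nanopore long reads) could be scripted roughly as follows; the read file names are hypothetical placeholders, and only standard Unicycler command-line options are assumed.

```python
import subprocess

# Hypothetical input read files for strain GZ04-0083; adjust paths to the real data.
short_r1 = "GZ04-0083_illumina_R1.fastq.gz"
short_r2 = "GZ04-0083_illumina_R2.fastq.gz"
long_reads = "GZ04-0083_nanopore.fastq.gz"

# Unicycler hybrid assembly: -1/-2 take the paired short reads, -l the long reads.
subprocess.run(
    ["unicycler", "-1", short_r1, "-2", short_r2, "-l", long_reads,
     "-o", "GZ04-0083_assembly", "--threads", "8"],
    check=True,
)
```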
Table 1 demonstrates that the E. coli GZ04-0083 isolate exhibits a robust multidrug-resistant profile, supported by both phenotypic and genotypic evidence. Chromosomally encoded genes such as gyrA and parC are associated with fluoroquinolone resistance [45], while nfsA and nfsB contribute to decreased susceptibility to nitrofurantoin [46], underlying the strain's inherent resistance mechanisms. Moreover, the pNDM-5-0083 plasmid harbors additional resistance genes, enhancing the isolate's capability to withstand various antibiotics. These genes include blaNDM-5, which confers resistance to carbapenems, and blaTEM-1B, dfrA12, sul1, aadA2, and rmtB, which collectively provide resistance against β-lactams, sulfonamides, trimethoprim, and aminoglycosides. To clarify whether pNDM-5-0083 contained fragments of different evolutionary origins, the pNDM-5-0083 plasmid was BLAST-matched against the NCBI database to screen for plasmids with homology to pNDM-5-0083 (Fig. 3). The results revealed that pNDM-5-0083 shared 99.89% identity with 79% query coverage with the pCTXM-2271 plasmid (GenBank accession number MF589339.1), which is a known MDR plasmid isolated from E. coli strain 2271 [47], and 99.96% identity with 83% query coverage with plasmid A (CP010149.1), which was identified in E. coli strain D6 from a dog. The high sequence identities but partial query coverages, particularly in the MDR region, indicate that pNDM-5-0083, while closely related to these known plasmids, likely contains unique genetic elements that are not present in the aligned plasmids. This partial alignment suggests that pNDM-5-0083 may harbor unique sequences that contribute to its distinct resistance profile, meriting further investigation to fully understand its role in antimicrobial resistance.
Discussion
In recent years, blaNDM-5-producing E. coli ST167 has been reported across various countries and regions globally [48][49][50][51][52]. Research indicates that the blaNDM-5 resistance gene in E. coli tends to spread through plasmid-mediated horizontal transmission [53], significantly contributing to the rapid proliferation of this resistance gene among E. coli populations. Whole-genome sequencing analysis can enhance our understanding of the mechanisms behind the spread of blaNDM-5, providing detailed insights into the mobile genetic elements that facilitate this transmission and the evolutionary pressures that drive the dissemination of resistance across different environments [54]. The pNDM-5-0083 plasmid harbored seven resistance genes, including blaNDM-5, blaTEM-1B, dfrA12, sul1, tet(A), aadA2, and rmtB, a profile similar to that of an E. coli strain isolated from a urine sample carrying blaNDM-5 [55], indicating a rise in the prevalence of MDR E. coli strains.
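A rough sketch of the plasmid-against-plasmid comparison discussed above (per-alignment identity and overall query coverage) is given below; it assumes BLAST+ is installed locally and that the two plasmid sequences have been saved under the hypothetical FASTA file names used here.

```python
import subprocess

QUERY = "pNDM-5-0083.fasta"      # hypothetical local file names
SUBJECT = "pCTXM-2271.fasta"

# Pairwise blastn with tabular output (one line per high-scoring pair).
out = subprocess.run(
    ["blastn", "-query", QUERY, "-subject", SUBJECT,
     "-outfmt", "6 qseqid sseqid pident length qstart qend qlen"],
    capture_output=True, text=True, check=True,
).stdout

covered = set()
weighted_identity = 0.0
aligned_length = 0
query_len = None
for line in out.strip().splitlines():
    _, _, pident, length, qstart, qend, qlen = line.split("\t")
    covered.update(range(int(qstart), int(qend) + 1))
    weighted_identity += float(pident) * int(length)
    aligned_length += int(length)
    query_len = int(qlen)

# Query coverage: fraction of query positions covered by at least one alignment;
# identity: length-weighted mean over alignments (a crude approximation).
if query_len:
    print(f"query coverage ~ {100 * len(covered) / query_len:.1f}%")
    print(f"mean identity  ~ {weighted_identity / aligned_length:.2f}%")
```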
pNDM-5-0083 closely resembles pCTXM-2271, another multidrug-resistant plasmid found in E. coli: it shares common elements such as the mobile insertion sequence IS91, the replication initiator repA, the plasmid replication-associated genes parA and parB, and the IncF plasmid conjugative transfer proteins trbABCDEGIJ, along with virulence factors like sitABCD and iroBCDEN involved in iron acquisition. However, pNDM-5-0083 lacks the floR resistance gene found in pCTXM-2271 and features a distinct MDR region that sets it apart from pCTXM-2271 [47]. Researchers have found that insertion sequences [56,57], transposons [58] and integrons [59] play key roles in facilitating recombination events and the extensive transfer of resistance genes. In pNDM-5-0083, the complex transposon structure repA-IS15DIV-insB-blaTEM-1-rmtB-GroEL-IS26-IS30-blaNDM-5-bleMBL-IS91 includes the repA element, which initiates replication of the MDR region. It is paired with the pemK/pemI toxin-antitoxin system, utilizing postsegregational killing to ensure plasmid stability during bacterial transfer [60]. Following this, IS15DIV, a member of the IS6 family similar to IS26 [61], is inserted, promoting duplication of target site sequences in the MDR region alongside the integron insB. This upstream arrangement of the MDR region, repA-pemK/I-IS15DIV-insB-blaTEM-1-rmtB, is analogous to that in the unnamed plasmid A (CP104348.1), while the sequence IS26-IS30-blaNDM-5-bleMBL-IS91 is consistent with those in pNDM-d2e9 and pLZ135-NDM, highlighting a common strategy for the horizontal transfer and homologous recombination of the resistance genes blaTEM, blaNDM-5, and bleMBL under antibiotic pressure. This complex genetic architecture underscores the dynamic capability of the IS15DIV, IS26, IS30 and IS91 elements to facilitate the spread and evolution of resistance genes across bacterial populations. In addition, unlike these three homologous plasmids, pNDM-5-0083 contains the transposon Tn21 and the integron intI1 at the 3'-end of its MDR region. Previous studies have confirmed that Tn21, known for harboring integrons like intI1 that incorporate aminoglycoside and beta-lactam resistance genes, is widespread among both clinical and environmental gram-negative isolates [62][63][64]. This highlights the crucial roles of intI1 and Tn21 as primary vectors in the dissemination of antibiotic resistance genes. The presence of these mobile genetic elements at the terminal end of the MDR region in pNDM-5-0083 suggests that they may contribute to frequent homologous recombination events, resulting in a novel variable region compared to its related plasmids. Moreover, another integron, insB, and the molecular chaperone GroEL are integrated within the MDR region, enhancing the expression and ensuring the proper folding of antibiotic resistance proteins, respectively. Together, the strategic incorporation of these elements within the MDR region not only augments the functionality of resistance genes but also plays a pivotal role in enhancing plasmid stability. It promotes the horizontal transfer of the multidrug resistance attributes between different plasmids and their widespread dissemination across various bacterial strains, while ensuring the effective and stable expression of resistance determinants. In addition to the resistance genes located on both chromosomal and plasmid DNA, mechanisms of antibacterial resistance in E.
coli often involve virulence factors that promote drug efflux and enhance biofilm formation [30,31]. Biofilms contribute to antibiotic resistance by creating a protective environment that limits antibiotic penetration and supports the persistence of resistance genes within bacterial communities. In our study, chromosomally encoded virulence genes such as fdeC, yagX/ecpC, and vgrG/tssI were found to enhance bacterial adhesion to host cells, potentially facilitating biofilm formation [65][66][67]. Moreover, virulence determinants carried by the pNDM-5-0083 plasmid, such as the iron acquisition systems (e.g., the iro and sit gene families), may indirectly bolster biofilm maintenance by impacting the nutritional status and survival capabilities of the bacteria [68,69]. While virulence factors like colicin M and enolase do not directly induce biofilm formation, they augment bacterial resilience to environmental stresses, thereby aiding the transmission and endurance of resistance genes [70,71]. However, our study recognizes certain limitations, particularly the absence of gene editing experiments to confirm the functionality of the mobile genetic elements and virulence factors within the plasmid after annotation. Future research should prioritize experimental studies on biofilm formation to thoroughly investigate the intricate relationships between biofilm development, virulence factors, and antibiotic resistance. This approach will deepen our understanding of the mechanisms that facilitate the persistence and transmission of antibiotic-resistant E. coli strains. Additionally, infection control statistics from our institution from 2017 to 2023 have demonstrated a relatively stable CPE isolation rate, fluctuating between 2.5% and 5.4%. Within this dataset, the highest resistance rates for Klebsiella pneumoniae against imipenem and meropenem reached 5.8% and 6.8%, respectively, while for E. coli, they peaked at 1.2% and 1.4%. Compared to the data from the China Antimicrobial Resistance Surveillance System (CHINET), our institution's rates of carbapenem-resistant Klebsiella pneumoniae are lower, whereas those of carbapenem-resistant E. coli align with national averages. Although the newly discovered blaNDM-5-carrying plasmid has not increased the CPE isolation rate at our institution, this study still holds significant value for understanding and managing antibiotic resistance in a broader context.
Conclusion
This study presents a detailed analysis of pNDM-5-0083, a novel blaNDM-5-carrying plasmid discovered in an E. coli isolate. Our comprehensive genomic annotation and analysis shed light on the evolutionary dynamics of the MDR region in pNDM-5-0083 and illuminate the genetic elements that might facilitate genetic recombination to form a unique plasmid architecture. These insights highlight potential targets for combating the spread of antibiotic resistance, emphasizing the importance of understanding plasmid dynamics in microbial resistance mechanisms.
Fig. 1 S1-PFGE and Southern blot of strain GZ04-0083. The left lane is the marker of the S1-PFGE profile of the reference strain; the middle lane shows the plasmid DNA of strain GZ04-0083 cleaved by S1 endonuclease; the right lane shows Southern blot hybridization with the blaNDM-5-specific probe.
Fig. 2 Gene structure of E.
coli GZ04-0083 carrying plasmid pNDM-5-0083. The outer ring represents the annotation of the plasmid, and the genes are annotated with different colors according to their functions. Yellow represents transfer RNA (tRNA); red represents ribosomal RNA (rRNA); blue-purple represents CDS (coding sequence), i.e. protein-coding regions; black GC Content represents the GC content of the plasmid; GC Skew represents plasmid strand bias and measures the G versus C content of single-stranded DNA in the plasmid: green GC Skew+ represents plasmid single-stranded DNA with greater G content than C, and purple GC Skew- represents plasmid single-stranded DNA with less G content than C. The red frame marks the resistance genes carried by the pNDM-5-0083 plasmid.
Fig. 3 Genomic comparison of pNDM-5-0083 with pCTXM-2271 (GenBank accession number MF589339.1) as well as plasmid A (GenBank accession number CP010149.1). The blue color indicates resistance genes, and each arrow represents one gene. The depth of grey shading indicates the similarity of each part of the genome with the sequence of the pNDM-5-0083 genome. The darker the shade of grey, the higher the degree of similarity. The graph shows fragments with at least 65% similarity, so fragments below this threshold are not shown.
Table 1 Antibiotic susceptibility results of the E. coli GZ04-0083 isolate. MIC, minimum inhibitory concentration; S, susceptible; I, intermediate; R, resistant. a Antimicrobial-resistance genes identified in the plasmid annotation; b antimicrobial-resistance genes identified in the chromosome annotation.
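The GC skew track described in the Fig. 2 legend can be reproduced with a simple sliding-window calculation; the sketch below is generic (any sequence string), and the window and step sizes are arbitrary illustrative choices, not the ones used to draw the figure.

```python
def gc_skew(sequence: str, window: int = 1000, step: int = 500):
    """Sliding-window GC skew, (G - C) / (G + C), along one strand."""
    seq = sequence.upper()
    skews = []
    for start in range(0, max(len(seq) - window, 0) + 1, step):
        chunk = seq[start:start + window]
        g, c = chunk.count("G"), chunk.count("C")
        skews.append((start, (g - c) / (g + c) if g + c else 0.0))
    return skews

# Toy example; a real run would read the pNDM-5-0083 sequence from a FASTA file.
toy = "ATGGGGCCATGCGCGGGTACCCGGG" * 400
for pos, skew in gc_skew(toy, window=2000, step=2000)[:5]:
    print(pos, round(skew, 3))
```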
5,194
2024-05-23T00:00:00.000
[ "Medicine", "Biology" ]
Extraction of Mantle Discontinuities From Teleseismic Body-Wave Microseisms
Ocean swell activities excite body-wave microseisms that contain information on the Earth's internal structure. Although seismic interferometry is feasible for exploring structures, it faces the problem of spurious phases stemming from an inhomogeneous source distribution. This paper proposes a new method for inferring seismic discontinuity structures beneath receivers using body-wave microseisms. This method considers the excitation sources of body-wave microseisms to be spatially localized and persistent over time. To detect the P-s conversion beneath the receivers, we generalize the receiver function analysis for earthquakes to body-wave microseisms. The resultant receiver functions are migrated to the depth section. The detected 410- and 660-km mantle discontinuities are consistent with the results obtained using earthquakes, thereby demonstrating the feasibility of our method for exploring deep-earth interiors. This study is a significant step toward body-wave exploration considering the sources of P-wave microseisms to be isolated events.
Introduction
Microseisms are random seismic wavefields with a frequency range of 0.05-0.50 Hz excited by ocean swell activities (e.g., Nishida, 2017) and can be categorized into primary microseisms (PMs; <0.1 Hz) and secondary microseisms (SMs; >0.1 Hz). PMs are excited by topographic coupling between surface ocean gravity waves and seismic waves (Hasselmann, 1963), whereas SMs are excited by the nonlinear effects of surface ocean gravity waves (Longuet-Higgins, 1950). Although Rayleigh and Love waves dominate the SMs, teleseismic P-waves have also been observed (e.g., Gerstoft et al., 2008). It has long been understood that these random wavefields represent noise in earthquake seismology. In the late 2000s, seismic interferometry (SI) turned the noise into a signal elucidating the Earth's internal structures (e.g., Snieder & Larose, 2013). SI is a technique for extracting wave propagations between seismic stations by calculating the cross-correlation of random seismic wavefields of station pairs. Because the random wavefield is excited by distributed noise sources, only noise sources within the stationary phase regions contribute constructively to the cross-correlation function. Because this assumption is more valid for surface waves, surface-wave exploration became widely used in the late 2000s (e.g., Shapiro & Campillo, 2004). For body waves, in contrast, an inhomogeneous source distribution generates spurious phases, prohibiting the accurate reconstruction of wave propagations (e.g., Li et al., 2020; Pedersen & Colombi, 2018). Such studies showed that, for estimating seismic structures using P-wave microseisms, considering their source locations is important. Back projection before cross-correlation is one of the possible solutions for improved extraction (e.g., Liu & Shearer, 2022). This study employs a strategy that is different from SI.
Instead of assuming distributed sources, P-wave microseisms are assumed to be spatially isolated events (within several hundred kilometers) with persistent excitation (∼6 hr). We developed a new method for estimating seismic structures using P waves from the centroid locations of P-wave microseisms, focusing on the P-s conversion beneath the receivers. To detect this conversion, we generalized a receiver function method for earthquake data (e.g., Langston, 1979) to P-wave microseisms. Receiver function analysis using earthquakes sometimes involves narrow azimuthal coverage owing to inhomogeneously distributed hypocenters. Utilizing P-wave microseisms in the generalized receiver function (gRF) analysis can improve the azimuthal coverage and broaden the illuminated area. This study targets mantle discontinuities at 410/660 km depth. The structure of mantle discontinuities has been inferred by several methods (Kind & Li, 2015), such as the precursors of the PP and SS phases (e.g., Shearer & Masters, 1992), ScS reverberation (e.g., Revenaugh & Jordan, 1991), and receiver functions (e.g., Tonegawa et al., 2005). We will discuss the feasibility of our method by comparing our results with those of previous studies.
Data
We used seismograms from 690 seismic stations (Figure 1b) of the high-sensitivity seismograph network (Hi-net: https://doi.org/10.17598/NIED.0003; Okada et al., 2004) deployed by the National Research Institute for Earth Science and Disaster Resilience (NIED). The vertical and horizontal components of velocity meters with a natural frequency of 1 Hz are recorded at the bottom of the boreholes of the stations. We analyzed the data after eliminating the coherent periodic noise originating from the logger (Takagi et al., 2015) and correcting for the instrumental response in the time domain (Maeda et al., 2011). To detect the P-s conversion, we analyzed teleseismic P-wave microseisms from 0.10 to 0.25 Hz. This frequency range was determined by the narrow spectral content of P-wave microseisms (e.g., Figure 3 of Nishida & Takagi, 2016) due to the acoustic resonance of the ocean (Gualtieri et al., 2014). Assuming that the excitation
sources were persistent in time but localized in space, they were approximated using a persistent vertical single force at the centroid location. Nishida and Takagi (2022) constructed a centroid single-force catalog of P-wave microseisms from 2004 to 2020 using Hi-net data. To construct the catalog, they developed an autofocusing method that utilizes information on both the slowness and the wavefront curvature. Although the catalog also includes PP and PKIKP events, the number of such events with high signal-to-noise ratios is smaller than that of P events. Accordingly, we mainly focused on P events. Each event in the catalog includes the event date, centroid location, centroid single force, P-wave beam power, and median absolute deviation of the beamforming result. We chose 5,780 P events with high signal-to-noise ratios. The signal-to-noise ratio was defined as the ratio of the P-wave beam power to the median absolute deviation of the beamforming result. The centroid locations of the P-wave microseisms were classified as the Northern Atlantic Ocean, Northern Pacific Ocean, and Southern Pacific Ocean (Figure 1). The histogram of epicentral distances revealed peaks at 30-40° and 90-100° (Figure S1a in Supporting Information S1).
Calculation of gRFs of P-Wave Microseisms
The receiver function analysis for earthquakes was generalized to persistent P-wave microseisms. The duration of the source time function of P-wave microseisms is typically several hours to several days (e.g., Zhang et al., 2010), whereas that of most earthquakes is shorter than several minutes. This section describes the procedure for generating gRFs using P-wave microseisms. The centroid location of the P-wave microseisms was fixed for 6 hr. Six-hour-long waveforms were split into 1,024-s time windows. We selected time windows with low mean squared amplitudes at 0.05-0.10 Hz (i.e., less than 7.6 × 10⁴ nm²/s²) and 0.10-0.20 Hz (i.e., less than 2.5 × 10⁴ nm²/s²) to avoid contamination from local earthquakes and local Rayleigh waves. In this frequency range, locally excited Rayleigh waves dominate the records in Japan because teleseismic Rayleigh waves attenuate with propagation owing to scattering and intrinsic attenuation. Indeed, the relative amplitude of P waves to Rayleigh waves is anti-correlated with the mean squared amplitude in the 0.1-0.2 Hz range in Japan (Takagi et al., 2018). Although data selection according to the H/V ratio at inland stations is feasible for enhancing teleseismic P-wave microseisms (e.g., Pedersen et al., 2023), we chose data with a low mean squared amplitude.
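A minimal sketch of the window-selection step described above, assuming each 1,024-s window is available as a NumPy array sampled at fs Hz; the band edges and amplitude thresholds follow the text, while the filter choice (zero-phase Butterworth) and variable names are our own illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mean_squared_amplitude(trace, fs, f_lo, f_hi):
    """Mean squared amplitude of a trace band-passed to [f_lo, f_hi] Hz."""
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    return float(np.mean(filtfilt(b, a, trace) ** 2))

def keep_window(trace, fs, thresh_low=7.6e4, thresh_mid=2.5e4):
    """Accept a window only if both bands fall below their thresholds
    (nm^2/s^2, as in the text), rejecting local earthquakes and
    locally excited Rayleigh waves."""
    return (mean_squared_amplitude(trace, fs, 0.05, 0.10) < thresh_low and
            mean_squared_amplitude(trace, fs, 0.10, 0.20) < thresh_mid)

# Toy usage with synthetic noise sampled at 10 Hz (amplitudes here are arbitrary).
fs = 10.0
window = np.random.randn(int(1024 * fs))
print(keep_window(window, fs))
```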
For each time step, the Fourier spectrum of the source time function of the incident P wave, $P_j(\omega)$, was estimated by stacking the vertical components over all stations after aligning them with the theoretical P-wave travel time. Stacking the vertical components of all stations enhanced the relative amplitude of body waves to Rayleigh waves. The theoretical P-wave travel time $T_{iP}$ was calculated using the tauP toolkit (Crotwell et al., 1999) for the 1-D velocity model AK135 (Kennett et al., 1995), with correction for the 3-D velocity model (0-60 km depth) beneath Japan (Nishida et al., 2008), assuming vertical incidence. The Fourier spectrum of the source time function of the jth time step is given by

$$P_j(\omega) = \frac{1}{N} \sum_{i=1}^{N} Z_{ij}(\omega)\, e^{i\omega T_{iP}},$$

where $N$ is the number of seismic stations and $Z_{ij}(\omega)$ is the vertical component of the jth time window at the ith station. The gRF of P-wave microseisms $RF_i(\omega)$ at frequency $\omega$ is evaluated by minimizing the squared difference $S_i(\omega)$ between the radial component $R_{ij}(\omega)$ (the ith station and jth time window) and $RF_i(\omega) P_j(\omega)$ as

$$S_i(\omega) = \sum_{j=1}^{N_t} \left| R_{ij}(\omega) - RF_i(\omega) P_j(\omega) \right|^2,$$

where $N_t$ is the number of time steps. At each frequency, from the condition that the partial derivative of $S_i$ with respect to the receiver function is zero, the radial gRF of P-wave microseisms $RF_i$ can be calculated as

$$RF_i(\omega) = \frac{\sum_{j=1}^{N_t} R_{ij}(\omega)\, P_j^{*}(\omega)}{\sum_{j=1}^{N_t} P_j(\omega)\, P_j^{*}(\omega) + w},$$

where $w$ is the 5% water level to avoid instability. The denominator is the power spectrum of the incident P wave, whereas the numerator is the cross-spectrum of the incident P wave $P_j$ and the radial component $R_{ij}$. In the case of only one time window, this equation is equivalent to the typical receiver function, including the water level. The vertical component of the gRF is calculated in the same manner, replacing $R_{ij}(\omega)$ by $Z_{ij}(\omega)$ in Equation 3. All the radial gRFs for all events were linearly binned-stacked to enhance the signal-to-noise ratio. The width of the distance bin was 25 km. All radial gRFs $RF_i$ were normalized by the peak amplitude of the vertical gRFs $ZF_i$ and aligned at the peak time of $ZF_i$ (e.g., Zhang et al., 2010). Quality control of the gRFs was performed before stacking, rejecting those with large amplitudes 50-500 s before the P peak. The radial gRFs using PP events were calculated in the same manner, while those using PKIKP events were calculated in a slightly different manner (Supporting Information S1). Because the number of such events was smaller than the number of P events, the corresponding results are shown in Supporting Information S1 (Figures S3-S6).
Binned Stack of gRFs
The binned stack of radial gRFs shows the P-s converted waves at the mantle discontinuities at epicentral distances of 3,000-11,000 km (Figure 2). The relative arrival times of these waves were consistent with the theoretical travel times of AK135. The ringing shape of P waves varies with the epicentral distance because the narrow-band spectral content of SMs is affected by the ocean depth in the source area owing to the resonance of the water layer (e.g., Gualtieri et al., 2014). The amplitude variation of vertical gRFs before the P peaks depends on the stacking number (Figure S2 in Supporting Information S1). The binned stack of the vertical gRFs shows no clear PP- and P-reflected waves at the mantle discontinuities (PP410P/PP660P).
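The source-spectrum estimate and water-level deconvolution above can be sketched in a few lines of NumPy. This is a schematic under simplifying assumptions: the spectra are assumed to be precomputed per station and time window, the travel-time alignment already applied to the vertical components, and the array names are ours rather than the paper's.

```python
import numpy as np

def grf_water_level(Z, R, water_fraction=0.05):
    """Generalized receiver function by water-level deconvolution.

    Z, R : complex arrays of shape (n_stations, n_windows, n_freqs) holding the
           vertical and radial spectra, with the verticals already time-shifted
           by the theoretical P travel time before the FFT.
    Returns RF with shape (n_stations, n_freqs).
    """
    # Source spectrum per time window: stack of the vertical components over stations.
    P = Z.mean(axis=0)                                   # (n_windows, n_freqs)

    # Cross-spectrum (numerator) and source power spectrum (denominator), summed over windows.
    numer = np.sum(R * np.conj(P)[None, :, :], axis=1)   # (n_stations, n_freqs)
    denom = np.sum((P * np.conj(P)).real, axis=0)        # (n_freqs,)

    # The water level stabilizes the division at notches of the source spectrum.
    w = water_fraction * denom.max()
    return numer / (denom + w)

# Toy usage with random spectra: 10 stations, 8 time windows, 257 frequencies.
rng = np.random.default_rng(0)
shape = (10, 8, 257)
Z = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
R = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
print(grf_water_level(Z, R).shape)  # (10, 257)
```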
Possible reasons for the emergence of P-s converted phases in the radial components and the lack of the reflected phases in the vertical components are as follows: first, the reflected phases are included in the source time function because of the small difference between the slownesses of PP410P and PP660P and that of P (∼0.0005 s/km at an epicentral distance of 80°); second, the reverberation at the bounce point causes destructive interference for the reflected phases. The P-s converted phases, however, are free of both effects.
1-D Depth Migration
All the radial gRFs were depth-migrated with the 1-D structure of AK135. Assuming that the radial gRFs consist of P-s converted waves, all gRFs with epicentral distances greater than 30° were stacked along the theoretical travel time of the P-s converted waves at depths of 200-1,000 km. Travel time was calculated using the tauP toolkit (Crotwell et al., 1999). Four groups of gRFs (all events, Northern Atlantic events, Northern Pacific events, and Southern Pacific events) were depth-migrated (Figure 3). At shallow depths (200 km), the results were characterized by large amplitudes, which can be explained by contamination by the ringing of the P waves. All results showed common positive peaks around the depths of 410 and 660 km. The depths of the peak amplitudes were somewhat greater because of the difference in the crustal structure between Japan and AK135. The amplitudes of the P-s waves were compared to previous seismic observations. The P-s transmission coefficients estimated using Japanese broadband stations are 2.7 ± 0.8% and 6.0 ± 0.7% (Kato & Kawakatsu, 2001), while using the PREM parameters (Dziewonski & Anderson, 1981) they are 1.5% and 4.6% (assuming near-vertical incidence with a slowness of 0.06 s/km, based on Equation 4.3.34 of Saito, 2016). In this study, we obtained the P-s transmission coefficients of the 410- and 660-km discontinuities as 0.8%-1.8% and 0.7%-1.5%, respectively. These values are smaller than those reported in previous studies, with the P-s transmission coefficient at 660 km depth being smaller than that at 410 km depth. Waveform distortion due to the high water level during deconvolution may decrease the amplitude of P-s converted waves. However, waveform distortion alone cannot explain the smaller amplitude of P660s compared with that of P410s. A plausible reason is the sharpness of the 410- and 660-km discontinuities, that is, the frequency dependence of the P-s transmission coefficients, because this study used a relatively higher frequency band (0.10-0.25 Hz) than those of previous studies (0.005-0.200 Hz). The widths of the mantle discontinuities may be related to the water content of the transition zone (e.g., Helffrich & Wood, 1996). Our results are consistent with the observed frequency dependence of the P-s conversion at the 660 km discontinuity reported previously (Tonegawa et al., 2005). Another possible reason is that the depression of the 660-km discontinuity (a ∼40 km depression in southwest Japan; Tono et al., 2005; Tonegawa et al., 2006) may make P660s incoherent for stacking.
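The P-to-s differential travel times used for the depth migration can be obtained from the TauP toolkit through ObsPy; the sketch below assumes ObsPy is installed and uses an illustrative source depth and epicentral distance, with the AK135 model named in the text.

```python
from obspy.taup import TauPyModel

model = TauPyModel(model="ak135")

# Differential time between a converted phase and direct P gives the moveout used
# to map gRF amplitude to conversion depth (shown here for a single geometry).
arrivals = model.get_travel_times(
    source_depth_in_km=10.0,          # illustrative microseism source depth
    distance_in_degree=60.0,          # epicentral distance in degrees
    phase_list=["P", "P410s", "P660s"],
)
times = {arr.name: arr.time for arr in arrivals}
print(round(times["P410s"] - times["P"], 1), "s after P (conversion at 410 km)")
print(round(times["P660s"] - times["P"], 1), "s after P (conversion at 660 km)")
```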
3-D Depth Migration The migration of all the radial gRFs with the 3-D structure was calculated.Assuming that the radial gRFs consisted of P-s converted waves, all the gRFs with epicentral distances greater than 30° were projected to the 100-1,000 km depth domain.Figure 4 shows a cross-section using these depth-projected gRFs.At each point on the cross-section, the gRFs were stacked with a horizontal distance smaller than 1°.The image of the cross-section shows the horizontally continuous 410-and 660-km discontinuities.The 410-km discontinuity has a gap where the Pacific plate (red line in the figure) crosses the 410 km depth.The 660-km discontinuity is depressed where the Pacific slab stagnates above the 660-km discontinuity.However, the Pacific plate is not evident in the cross-section because the spatial stacking length is too large to image the subducting plate surface (Kawakatsu & Yoshioka, 2011).10.1029/2023GL105017 6 of 8 The cross-section from the resultant gRFs was compared with the stacked image from conventional receiver functions using an earthquake (Tonegawa et al., 2006).Because the locations of both the cross-section and frequency band are similar, the cross-section of Tonegawa et al. ( 2006) is adequate for reference.The signatures of the mantle discontinuities were common in both images, demonstrating the reliability of this method. Applicability of This Method to Other Seismic Arrays The relative amplitude of body waves to surface waves of microseisms is one of the factors determining the signal-to-noise ratio in this method.The waveforms of the Hi-net stations are dominated by surface waves excited by the sea around Japan, whereas those of stations in the continental region are less dominated by surface waves owing to scattering decay (e.g., Vinnik, 1973).An inland array is preferable; however, our method still requires more than 500 stations to accurately extract the incident P-wave microseism.It also requires long-term observations, with time periods of over 10 years, and a sufficient number of events.Because dense observations have become popular in the last decade, our method could be feasible for extracting P-s converted waves using seismic networks on the continent. Conclusions We performed a generalized receiver function analysis of P-wave microseisms.Contrary to the SI assumption, this method assumes that the sources of body-wave microseisms are persistent in time and spatially localized.This strategy constitutes a solution to the spurious phase problem of body-wave extraction using SI.The resultant migration image showed the extraction of the P410s/P660s from the radial gRFs.The stacked image from the extracted P-s converted waves was consistent with the images from the conventional receiver function analysis using earthquakes.The results demonstrate the feasibility of our method using body-wave microseisms for exploring the Earth's deep interior, while this method can also be applied to other modern arrays.This study is a significant step toward seismic exploration while considering the sources of P-wave microseisms to be isolated events; it has the potential to extract information not only from the receiver side but also along the path, including the source side. Figure 1 . 
Figure 1.Centroid distribution of the mantle P-wave microseisms used in this study.(a) Red dots denote 5,780 P-wave source locations (Nishida & Takagi, 2022).The contour lines show the equidistant circles at every 10° from the N.HNSH station (34.7283°N 134.2744°E,Okayama Prefecture, Japan) of Hi-net.(b) Blue triangles denote the Hi-net stations used.(c)The ray paths of P410s (blue line) and P660s (green line) at the epicentral distance of 60° with AK135(Kennett et al., 1995).The star denotes the P-wave source location, whereas the red triangle denotes the station. Figure 2 . Figure 2. Binned stack of all the radial gRFs of all seismic stations.Each stacked trace is normalized by the maximum value.The peak at 0 s is the P-wave.Two dashed lines show the theoretical travel times of P410s and P660s from AK135.The figure shows these phases along each line.The right panel shows the number of radial gRFs used in the binned stack. Figure 3 . Figure 3. (a) Depth migration result of the radial gRFs converted from 200 to 1,000 km depth.Red, blue, green, and black lines indicate the results using all the events, events in the Northern Atlantic, events in the Northern Pacific, and events in the Southern Pacific, respectively.Peaks exist around the depths of 410 and 660 km.(b) Enlarged view in the 370-450 km depth range in panel (a).(c) Enlarged view in the 600-720 km depth range in panel (a). Figure 4 . Figure 4. Cross-section (A-A′) of the depth-projected radial gRFs.(a) Locations of the cross-section and stations (blue triangles) used in this study.(b) Stacked image of the A-A′ cross-section from 100 to 1,000 km depth.The red line denotes the depth of the top of Pacific slabs (Nakajima & Hasegawa, 2006; Nakajima et al., 2009).The shallow part near A′ is muted due to the poor ray path coverage.(c) Locations of the cross-section and stations (black triangles) of Tonegawa et al. (2006).(d) Stacked images from receiver function analysis of earthquakes (Tonegawa et al., 2006).This result used a 0.16 Hz low-pass Gaussian filter.OMH refers to the oceanic Moho, and LBP shows the lower boundary of the Pacific slab.Panels (c) and (d) were modified from Figures 1a and 6b of Tonegawa et al. (2006).
IDUC: An Improved Distributed Unequal Clustering Protocol for Wireless Sensor Networks Due to the imbalanced energy consumption among nodes in wireless sensor networks, some nodes die prematurely, which decreases the network lifetime. To solve this problem, existing clustering protocols usually construct unequal clusters by exploiting uneven competition radius. Taking their imperfection on designing the uneven competition radius and intercluster communication into consideration, this paper proposes an improved distributed unequal clustering protocol (IDUC) for wireless sensor networks, where nodes are energy heterogeneous and scattered unevenly. The cores of IDUC are the formation of unequal cluster topology and the construction of intercluster communication routing tree. Compared with previous protocols, IDUC is suitable for various network scenarios, and it can balance the energy consumption more efficiently and extend the lifetime of networks significantly. Introduction A wireless sensor network (WSN) consists of plentiful lowpower sensor nodes capable of sensing, processing, and communicating. These sensor nodes observe the environment phenomenon at different points in the field, collaborate with each other, and send the monitored data to the base station (BS). As sensor networks have limited and nonrechargeable energy resources, energy efficiency is a very important issue in designing the network topology, which affects the lifetime of WSNs greatly. Thus, how to minimize energy consumption and maximize network lifetime are the central concerns when designing protocols for WSNs. In recent years, clustering has been proved to be an important way to decrease the energy consumption and extend lifetime of WSNs. In clustering scheme, sensor nodes are grouped into clusters, in each cluster, a node is selected as the leader named as the cluster head (CH) and the other nodes are called cluster members (CMs). Each CM measures physical variables related to its environment and then sends them to their CHs. When the data from all CMs arrive, CHs aggregate data and send it to the BS. Since CHs are responsible for receiving and aggregating data from their CMs and then transmitting the aggregated data to the specified destination, the energy consumption of which is much higher than that of CMs. To solve this problem, most clustering algorithms divide the operation into rounds and periodically rotate the roles of CHs in the network to balance the unequal energy consumption among nodes. However, there exists another problem; that is, energy consumption among CHs is also imbalanced due to the distance to the BS. In single-hop networks, CHs farther away from the BS need to transmit data to a long distance. Thus, the energy consumption of these CHs is larger than that of CHs closer to the BS. In multihop networks, CHs closer to the BS undertake the task of forwarding data, which means that the energy consumption of CHs closer to the BS is larger. The imbalanced energy consumption of nodes leads to a certain number of nodes dying prematurely, causing network partitions. To solve this problem, researchers design unequal clustering algorithms to balance the energy consumption among CHs. 
In this paper, aiming at energy heterogeneous networks where nodes are deployed unevenly, a more practical network case, we propose an improved distributed unequal clustering Related Works Since the energy consumption of CHs is much larger than that of CMs, in order to balance the energy among nodes, most clustering protocols adopt a rotation mechanism of CHs. The rotation methods used by the existing clustering algorithms can be divided into time-driven rotation and energy-driven rotation. In time-driven clustering algorithms [1][2][3][4][5], the role of the CH is rotated in the entire network periodically according to a predetermined time threshold. As each rotation is carried out in the entire network, the large overhead of recluster causes a lot of unnecessary energy waste. In energy-driven clustering algorithms [6][7][8][9][10][11], the role of CH is rotated when the residual energy of CH is less than a threshold. Recluster process only happens in local area; thus the large cost of global topology reconstruction can be avoided. However, aside from the imbalance energy consumption among CHs and CMs, there also exists another imbalance consumption phenomenon among CHs that can impact the network lifetime significantly. To solve this problem, many unequal clustering algorithms have been proposed. The unequal clustering algorithms proposed in [12][13][14] all divide the network field into cirques. In [12], clusters in the same cirque have the same size, whereas clusters in different cirques have different sizes. Some high-energy nodes are deployed to take on the CH role to control network operation, which ensures that the energy dissipation of nodes is balanced. In [13], a cirque-based static clustering algorithm for multihop WSNs is proposed. Clusters closer to the BS have smaller sizes. Utilizing virtual points in a corona-based WSN, static clusters with dynamic structures are formed in ERP-SCDS [14]. The communication way of CHs in the distributed clustering protocol EECS [15] is single-hop, and the protocol adopts a weighted faction to control the numbers of CMs to construct unequal clusters. That is, the cluster size is smaller if it is farther away from the BS, vice versa. EEUC [16] is also a distributed unequal clustering algorithm with intercluster multihop communication, which elects CHs based on the residual energy of nodes. Each node becomes a tentative CH with a probability . However, the competition radius used by EEUC is not ideal for heterogeneous WSNs, and since the quality of the generated CHs is affected by , there also exists "isolate points" in EEUC in some cases. LUCA [17] is similar to EEUC but presents more accurate theoretical analysis of optimal cluster size based on the distance between the CH and the BS. In [18], we proposed EADUC to overcome the defects of EEUC. When designing the competition radius, besides the distance between nodes and the BS, the residual energy of nodes is also taken into account. That is, CHs closer to the BS and possessing lower residual energy have smaller cluster sizes to preserve some energy for the intercluster data forwarding; thus the cluster size is more reasonable and more suitable for heterogeneous WSNs. Simultaneously, EADUC overcomes the "isolate points" problem. In [19], we proposed ECDC, in this algorithm, different coverage importance metrics are designed for different practical applications. We select cluster heads based on the relative residual energy and the coverage importance metrics of nodes. 
The intercluster communication adopts multihop forwarding mechanism. This algorithm can construct a better clustering topology with lower energy dissipation and better coverage performance through less control information. These protocols described above, such as EEUC, only consider the distance between nodes and the BS, which is not suitable for heterogeneous networks; thus EADUC and ECDC also take residual energy of nodes into account besides the distance factor. However, they all overlook the distribution of nodes in WSNs, and it is not always effective to apply these algorithms into networks where nodes are scattered unevenly. Aiming at this problem, what we need to do is design a protocol, which is suitable for various network scenarios, an improved distributed unequal clustering protocol (IDUC) is proposed in this paper. IDUC is effective in both heterogeneous and homogeneous network scenarios. In addition, it is suitable for WSNs where nodes are scattered evenly or unevenly. Our main contribution in the paper is as follows. (1) A new cluster head competition radius is proposed; it considers the distance among nodes and the BS, the residual energy of nodes, and the number of neighbor nodes within the nodes' communication range. (2) To meet the gap between the number of nodes within the communication ranges and finally cluster ranges, when designing the intercluster routing tree, CHs will choose CH nodes that possessing higher energy and fewer CMs as their next hops. Network Model. To simplify the network model, we adopt a few reasonable assumptions as follows. (1) There are sensor nodes that are distributed in an × square field. (2) The BS and all nodes are stationary after deployment. (3) All nodes can be heterogeneous. (4) All nodes are location-unaware. (6) The BS is out of the sensor field. It has enough energy, and its location is known by each node. (7) Each node has a unique identity . To transmit an -bit data to a distance , the radio expends energy is where is the transmission distance, elec , fs , and mp are parameters of the transmission/reception circuit. According to the distance between the transmitter and receiver, free space fs or multipath fading mp channel models is used. While receiving an -bit data, the radio expends energy is 3.2. Problem Description. As described above, some clustering protocols construct unequal clustering topology by uneven cluster head competition radius. However, these protocols, such as EEUC, only consider the distance between nodes and the BS, which is not suitable for heterogeneous networks; thus EADUC also takes residual energy of nodes into account besides the distance factor. Nonetheless, if we applied these algorithms into networks where nodes are scattered unevenly, such case is very likely to appear as shown in Figure 1, if the distance between and BS is near to the distance between and BS; meanwhile, the residual energy of and is also approximate, and it is notable that the number of CMs within the cluster range of is much larger than , which can also lead to the imbalanced consumption of and . Meanwhile, in most practical applications, the deployment of nodes in networks is not always uniform, as shown in Figure 2. clustering algorithms are inclined to be designed based on networks as Figure 2(a), an ideal network model, whereas these networks, as shown in Figure 2(b), are often neglected; since nodes are unevenly scattered, the nodes density is different in different area of the network. 
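The transmission and reception energies referenced in the network model above follow the standard first-order radio model used throughout the LEACH/EEUC literature; the sketch below gives that model, with typical parameter values as assumptions since the paper's own values are not reproduced in this excerpt.

```python
import math

# First-order radio model parameters (assumed typical values, J/bit and J/bit/m^n)
E_ELEC = 50e-9        # electronics energy per bit for transmit or receive
EPS_FS = 10e-12       # free-space amplifier energy (d^2 term)
EPS_MP = 0.0013e-12   # multipath amplifier energy (d^4 term)
D0 = math.sqrt(EPS_FS / EPS_MP)   # crossover distance between the two channel models

def tx_energy(l_bits, d_m):
    """Energy to transmit an l-bit packet over distance d."""
    if d_m < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d_m ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d_m ** 4

def rx_energy(l_bits):
    """Energy to receive an l-bit packet."""
    return l_bits * E_ELEC
```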
In such scenario, case appearing in Figure 1 easily happens when we applied existing clustering protocol. Thus, we need to control the number of CMs of each cluster; that is, if nodes have more communication neighbor nodes, their cluster competition radii should be smaller, vice versa. In fact, it is easy to obtain a method to solve this problem, as shown in Figure 1, that is, to reduce the competition radius of , and to increase the competition radius of , correspondingly. With the adjustment of competition ranges, the numbers of CMs covered by and are all adjusted to be more reasonable. Thus, it is necessary to design a new CH competition radius for such networks, besides the distance from the nodes to BS and the residual energy of nodes, we also take the number of neighbor nodes within the communication range of nodes into account. However, we have to admit that the number of neighbor nodes within the node initial communication range is very likely to be not equal with the number of CMs within its final cluster range. Thus, to further balance the consumption among CHs, when we construct the intercluster multihop routing tree, each CH needs to count the number of its CMs, and then it chooses the neighbor CH with fewer CMs and higher residual energy as its next hop. IDUC Details The whole operation is divided into rounds, where each round consists of a cluster setup phase and a data transmission phase. In the cluster setup phase, a clustering topology is formed, and, in the data transmission phase, a new routing tree is constructed to forward data. To save energy, the data transmission phase should be longer than the cluster setup phase. The descriptions of node states and several control messages are shown in Table 1, respectively. Cluster Setup Phase. In the network deployment phase, the BS broadcasts a signal, and each node can compute its approximate distance to the BS based on the received signal strength; this step is necessary when designing an unequal distributed clustering algorithm. The following is the cluster setup phase. The first subphase of this phase is information collection phase, whose duration is set as 1 . At the beginning of this phase, each node broadcasts a message within its communication range , and the message contains the node and its residual energy. Meanwhile, the node will receive from its neighbor nodes, and each node calculates the average residual energy of its neighbor nodes by using the following formula: where denotes the number of neighbor nodes of and denotes the residual energy of the th neighbor of . For any node , it calculates its waiting time for broadcasting the message according to the following formula: where is a real value randomly distributed in [0.9, 1], which is introduced to reduce the probability that two nodes send at the same time. After 1 expires, it starts the next subphase, cluster head competition phase, whose duration is set as 2 . In this phase, for any node , if it receives no when time expires, it broadcasts the within competition range to claim that it will be a CH. Otherwise, it gives up the competition. In order to generate unequal clusters, these nodes need to calculate their own competition radius . In [15], based on the distance between nodes and BS, the formula of is as follows: where max and min are the maximum and minimum distance from nodes to the BS, ( , ) is the distance from node to the BS, is a weighted factor whose value is in [0, 1], and max is the maximum value of competition radius. 
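Formulas (3)-(5) are not reproduced in this excerpt. The sketch below gives a plausible reconstruction consistent with the surrounding description: the neighbor-energy average of formula (3), a waiting timer in the spirit of formula (4) in which higher-energy nodes broadcast the head message earlier and a random factor in [0.9, 1] reduces collisions, and an EEUC-style competition radius for formula (5) that grows with distance to the BS. The function names and exact functional forms are assumptions, not the authors' expressions.

```python
import random

def avg_neighbor_energy(neighbor_energies):
    """Formula (3)-style average residual energy of a node's neighbors."""
    return sum(neighbor_energies) / len(neighbor_energies)

def waiting_time(E_i, E_nbr_avg, T2):
    """Formula (4)-style waiting timer, always shorter than the phase length T2.

    Nodes with above-average residual energy wait less and therefore tend to
    win the cluster-head competition; v in [0.9, 1] staggers the broadcasts.
    """
    v = random.uniform(0.9, 1.0)
    if E_i >= E_nbr_avg:
        return v * T2 * E_nbr_avg / E_i
    return v * T2

def eeuc_radius(d_to_bs, d_min, d_max, c, R_max):
    """Formula (5)-style EEUC competition radius: larger when farther from the BS."""
    return (1 - c * (d_max - d_to_bs) / (d_max - d_min)) * R_max
```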
By analyzing the formula (5), we can obtain that a larger ( , ) can generates a larger , which can guarantee that CHs farther away from the BS will control larger cluster areas, whereas CHs closer to the BS can control smaller cluster areas. In heterogeneous networks, nodes have heterogeneous initial energy. In the case that each node has the same energy consumption, nodes with low initial energy will die prematurely, reducing the network lifetime. In order to take full advantage of high-energy nodes, these high-energy nodes should take more tasks. Therefore, considering both the distance from nodes to the BS and the residual energy of nodes, we gave an improved formula of in EADUC [18] as follows: where and are the weighted factors in [0, 1] and is the residual energy of node . From the above formula we can see that the competition radius of the node is determined by ( , ) and . Formula (6) means that CHs with higher residual energy and farther away from the BS will control larger cluster area. However, the cluster competition radius designed above are not suitable for all networks where nodes are scattered unevenly, especially when the distance between these nodes and BS is similar, and the residual energy of these nodes is also approximate. Thus, we need to design a new competition radius to avoid imbalanced energy consumption in such case. Meanwhile, another remarkable problem generated in EADUC is that there is no restriction on the relation of and ; thus, in such case where both ( max − ( , ))/( max − min ) and 1 − ( / max ) are large and their weighted factors are also large, the we obtain is likely to be a negative value, which is not meaningful in practical applications; therefore, it is necessary to give a limit on the relation of and . Aiming at above disadvantages of existing , we propose a new cluster head competition radius , which is set as follows: where denotes the number of nodes in the network and is the number of neighbor nodes within the communication range of . , , and is the weighted factors in [0, 1], and we set + + ≤ 1. Formula (7) means that CHs closer to the BS, with lower residual energy and more communication neighbor nodes will have smaller cluster size. In conclusion, firstly, CHs closer to the BS can save energy for data forwarding. Secondly, CHs with lower residual energy dominating smaller clusters can avoid their premature death and prolong the network lifetime. Thirdly, CHs with more communication neighbor nodes control smaller clusters, which makes the competition radius more suitable for nonuniform networks. Obviously, in formula (7) makes IDUC suitable for various network scenarios. (1) If the network is energy homogeneous, we can set = 0 and + ≤ 1. (2) If the distribution of nodes in the network is uniform, we can set = 0 and + ≤ 1. (3) If nodes in the network is energy homogeneous and the distribution is nonuniform, we can set + + ≤ 1. According to practical network applications, we can adjust , and , and to be the optimal value to extend the network lifetime. When 2 expires, the next subphase is the cluster formation phase, whose duration is 3 . In this phase, each plain node chooses the nearest CH and sends the , which contains the and its residual energy. According to the received , each CH creates a node schedule list including the ℎ for its CMs. At this point, the entire cluster setup phase is completed. Algorithm 1 give the details of the whole cluster setup phase. Data Transmission Phase. 
In the data transmission phase, each CM collects local data from the environment periodically and then sends the data to the CH within its time slot according to the TDMA scheduling list to avoid the collisions among the members in the same cluster. When data from all the member nodes has arrived, the CH aggregates the data and sends it to the BS. Thus, this section is divided into two subphases, and -. CMs sense and collect local data from the environment, and send the collected data to CHs. This process is called -. For simplification, CMs communicate with CHs directly, just like LEACH. In -phase, we will construct a routing tree on the elected CH set, each CH will forward these data they have collected and aggregated from their CMs to the BS by other CHs. This multihop communication from CHs to the BS will further reduce and balance the energy consumption. Several nodes need to be selected as child nodes of the BS from all CHs and communicate with the BS directly. Therefore, each CH determines whether to be selected as the child node of the BS depending on its distance to the BS according to a threshold Euclidean distance . If the distance from CH to the BS is less than , communicates with the BS directly, and sets the BS as its next hop. Otherwise, it communicates with the BS through a multihop routing tree. The concrete process is as follows. We set the duration as 4 . At the beginning, each CH broadcasts a message within the radio radius with the values of the , the residual energy, and the distance to the BS. To ensure the connectivity of all CHs, we set the radio radius = 3 . If the distance from CH to the BS is less than , it chooses the BS as its next hop. Otherwise, it chooses its next hop according to these received . CH chooses the neighbor CH with higher residual energy, fewer CMs, and no farther away from the BS as its next hop. We give the formula of "Cost" when CH chooses CH as its next hop as follows: where denotes the residual energy of CH , denotes the number of CMs of . is a random value in [0, 1], and it is used to determine which factor is more important in choosing the next routing node. We can obtain from (8) that, nodes with higher residual energy and fewer CMs have larger cost value. To visually demonstrate the construction of inter-cluster communication routing tree, we give an instance shown in Figure 3. Node 1 chooses its next hop CHs which are closer to the BS than it, here only 4 is chosen. For 2 , when it chooses its next hop based on the distance to the BS, 1 , 4 and 5 are selected as candidate relay nodes, since 5 has the maximum cost, 5 is finally selected. For 4 , firstly 7 and 9 are selected, since cost( 7 ) > cost( 9 ), 7 is finally selected. For 9 , 10 and 11 , since their distances to the BS is smaller than DIST, they communicate with the BS directly. Algorithm 2 give the details of this phase. Protocol Analysis Theorem 1. There is at most one within each cluster competition radius . Proof. As we state previously, formula (4) ensures that different nodes have different waiting time. Assume that node has a shorter waiting timer than others and broadcasts the within radius . Thus, all nodes within this range will give up the competition and become plain nodes. Therefore, there is no more than one CH within the radius of any CH. From Theorem 1 and the proof, we can see that nodes with relatively higher energy are elected as CHs, and there is one and only one CH within the competition radius of any CH. Theorem 2. 
The cluster head set generated by the IDUC algorithm is a dominating set, which can cover all the network nodes. Proof. According to Theorem 1, there is no more than one CH within a cluster, so the cluster head set must be an independent set. After the execution of the IDUC algorithm, each node in the network either is the CH, or the member node of one cluster, any plain node adding to the cluster head set will destroy its independence. Hence, the cluster head set is a maximal independent set. Since a maximal independent set is also a dominating set, the cluster head set generated by the IDUC algorithm is a dominating set. Therefore, we conclude that the waiting time of any node is smaller than 2 . That is, any expected CH will broadcast a and become a CH before 2 expired, which can avoid the generation of "isolate points. " Proof. At the beginning of each round, each node broadcasts a . Thus, there are in the whole network. In each round, each CM broadcasts a , while each CH broadcasts a , a ℎ , and a . Suppose the number of generated CHs is , then the total number of is − , and the numbers of , ℎ , and messages are all . Thus, the total number of control messages in the entire network is + ( − ) + + + = 2 + 2 . Therefore, the message complexity of control messages in the network is ( ). IUDC adopts a distributed clustering strategy. Thus, the time complexity of the entire network is equal to that of a single node (1). In other words, the time complexity is a constant and has nothing to do with the network size. Taking a comprehension analysis of IDUC, we can summarize the advantages of IDUC as follows. (1) Nodes with relatively higher energy are elected as CHs; thus the frequency of recluster will be lower, which is helpful to reduce the energy consumed in reclustering. (2) There are no "isolate points" in the clustering topology generated by IDUC, which is proved in Theorem 3. (3) The design of competition radius takes the distance from nodes to the BS, the residual energy of nodes, and the number of neighbor nodes into account. Thus the setting of is more reasonable and suitable for both uniform networks and nonuniform networks. (4) From Theorems 1 and 3, there is one and only one CH within the competition radius of any CH. (5) The construction of the new routing tree takes the distance from CHs to the BS, the residual energy of CHs, and the number of CMs covered by CHs into account, which makes the IDUC more suitable for heterogeneous and nonuniform networks. Simulations The simulation is performed in − 2, and every simulation result shown in our paper is the average of 50 independent experiments unless otherwise specified. Each experiment is done in different scenarios and two scenarios are chosen to be shown as follows. Table 2. we run the cluster setup algorithm of IDUC in Scenario 2. The clustering topology gained is shown in Figure 5. It is obvious that the cluster competition radius is more reasonable when we set ( = 0.3, = 0.3, = 0.4), it contributes to an even clustering topology. That is, the design of in formula (7) avoids imbalance energy consumption among CHs due to the nonuniform distribution of nodes. Cluster Head Distribution. By analyzing formula (7), we can draw the conclusion that , , and max are all impact factors which can influence the competition radius of CHs. 
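Formulas (6) and (7) are likewise not reproduced in this excerpt; the following sketch implements one form consistent with the description, in which each factor is normalized to [0, 1] and the weights sum to at most 1 so the radius cannot become negative. It should be read as a hypothetical reconstruction rather than the authors' exact formula.

```python
def eaduc_radius(d_to_bs, d_min, d_max, E_res, E_max, alpha, beta, R_max):
    """Formula (6)-style radius: smaller when closer to the BS or lower in energy."""
    term_d = alpha * (d_max - d_to_bs) / (d_max - d_min)
    term_e = beta * (1 - E_res / E_max)
    return (1 - term_d - term_e) * R_max

def iduc_radius(d_to_bs, d_min, d_max, E_res, E_max,
                n_neighbors, n_total, alpha, beta, gamma, R_max):
    """Formula (7)-style radius: additionally shrinks with local node density."""
    assert alpha + beta + gamma <= 1, "weights must satisfy alpha + beta + gamma <= 1"
    term_d = alpha * (d_max - d_to_bs) / (d_max - d_min)
    term_e = beta * (1 - E_res / E_max)
    term_n = gamma * n_neighbors / n_total
    return (1 - term_d - term_e - term_n) * R_max
```

Setting the density weight to zero recovers an EADUC-style radius, and dropping the energy weight as well leaves a purely distance-based, EEUC-style radius, matching the special cases listed for homogeneous or uniformly deployed networks.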
max is the key parameter to determine the number of CHs generated by IDUC, while , , and determine the weights of nodes' distance to BS, nodes' residual energy, and nodes' communication neighbors number in designing , respectively. We run IDUC in Scenario 2. In this section, we first set = 0. Figure 6, and it shows the relationship between the number of CHs generated by IDUC and the cluster maximal competition radius max . As Figure 6 shown, the curve of = 0.3, = 0.3, and = 0.4 is higher than that of = 0.15, = 0.15, and = 0.2; meanwhile, the curve of = 0.15, = 0.15, and = 0.2 is higher than that of = 0, = 0, and = 0. The reason is that when max is fixed, with the increase of , , and , the cluster competition radius decreases. Furthermore, the number of CHs decreases with the growth of max , which means the number of generated CHs is determined by max , , , and . When , , and are fixed, the increase of max leads to the increase of , correspondingly, the number of CHs will decrease. As shown in Figure 6, three curves all decline with the gradually increase of max . To prove the validity of our intercluster routing tree, in Scenario 2, we set = 0. tree, only referring to the energy of CHs, referring to both the energy of CHs and the number of CMs as well as only referring to the number of CMs. In these cases, we compare the network lifetime (we define the network lifetime as percentage node alive PNA [8]. That is, the network lifetime is defined as the time when 90 percent of nodes are still alive). As shown in Figure 7, when set = 0.5, the network lifetime is the maximal in three groups. Thus, we can obtain that the simulation results coincide with the theoretical analysis. In practical application, we can adjust to be an optimal value according to different network scenarios. The following is the stability analysis of IDUC. We run LEACH, HEED, EADC, ECDC, and IDUC in two scenarios. Figure 8 shows the distribution of CHs numbers in different scenarios, we can see that IDUC and ECDC can achieve more stable performance than other algorithms. Compare IDUC with ECDC, we found that the stability of ECDC is better than that of IDUC in Scenario 1. However; when considering Scenario 2, the stability of IDUC is better than that of ECDC. Network Lifetime. In EADUC, we proved that the network lifetime in heterogeneous scenarios is longer than that in homogeneous scenarios if the residual energy of nodes is taken into account when designing the competition radius of CHs. Since we also consider the residual energy of nodes, in our simulation, we only need to test the performance of IDUC in different scenarios where nodes are distributed uniformly and nonuniformly, respectively. Thus, we set = 0.3, = 0.3, = 0.4, = 0.5, and max = 160 m and run IDUC in these scenarios and then compare its network lifetime with EADUC and ECDC. From Figure 9, we can see that the network lifetime of IDUC in uniform scenario is slightly longer than EADUC. The reason is that, different from EADUC, IDUC applies to network scenarios with nonuniform nodes distribution. Thus, in Scenario 2 of Figure 9, we can see that the network lifetime of IDUC is obviously longer than that of EADUC and ECDC, since no matter in designing or in selecting the intercluster routing nodes, IDUC considers the nodes density, which can balance and reduce the energy consumption of CHs and thus extend the network lifetime. 
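The cost expression of formula (8) used when selecting the inter-cluster routing nodes is also not reproduced here; the sketch below shows one form consistent with the description (higher residual energy and fewer cluster members give a larger cost, weighted by the random factor q, with the DIST threshold rule for CHs near the BS). The dictionary fields and helper names are illustrative assumptions.

```python
def relay_cost(E_j, E_max, m_j, m_max, q):
    """Formula (8)-style cost: favors relays with more energy and fewer members."""
    return q * (E_j / E_max) + (1 - q) * (1 - m_j / m_max)

def choose_next_hop(ch, neighbor_chs, dist_threshold, q):
    """Pick the next hop for cluster head `ch` on the inter-cluster routing tree.

    Each CH is a dict with 'id', 'd_bs' (distance to the BS), 'energy', 'members'.
    """
    if ch["d_bs"] < dist_threshold:
        return "BS"                      # close enough to reach the BS directly
    # Candidate relays: neighbor CHs no farther from the BS than ch itself.
    cands = [c for c in neighbor_chs if c["d_bs"] <= ch["d_bs"]]
    if not cands:
        return "BS"                      # fall back to direct transmission
    e_max = max(c["energy"] for c in cands)
    m_max = max(c["members"] for c in cands) or 1
    best = max(cands, key=lambda c: relay_cost(c["energy"], e_max,
                                               c["members"], m_max, q))
    return best["id"]
```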
To further test the performance of IDUC, in heterogeneous network Scenario 2 where nodes are scattered unevenly, we run LEACH, EEUC, EADUC, ECDC, and IDUC. Results in the Figure 10 show that ECDC and IDUC perform far better than LEACH, EEUC, and EADUC in prolonging the network lifetime. Conclusion In this paper, an improved distributed unequal clustering protocol IDUC is proposed, we design a new cluster competition radius considering the distance between nodes and the BS, the residual energy of nodes, and the numbers of neighbor nodes within the node communication range. Furthermore, to bridge the gap between the numbers of nodes within the initial communication radius and final cluster radius, we design a new intercluster communication routing tree. Theoretical analysis and simulation show that, the protocol is suitable for various network scenarios. In these scenarios, the nodes energy can be efficiently balanced and the network lifetime can be extended significantly.
Effect of defects on reaction of NiO surface with Pb-contained solution In order to understand the role of defects in chemical reactions, we used two types of samples, which are molecular beam epitaxy (MBE) grown NiO(001) film on Mg(001) substrate as the defect free NiO prototype and NiO grown on Ni(110) single crystal as the one with defects. In-situ observations for oxide-liquid interfacial structure and surface morphology were performed for both samples in water and Pb-contained solution using high-resolution X-ray reflectivity and atomic force microscopy. For the MBE grown NiO, no significant changes were detected in the high-resolution X-ray reflectivity data with monotonic increase in roughness. Meanwhile, in the case of native grown NiO on Ni(110), significant changes in both the morphology and atomistic structure at the interface were observed when immersed in water and Pb-contained solution. Our results provide simple and direct experimental evidence of the role of the defects in chemical reaction of oxide surfaces with both water and Pb-contained solution. Scientific RepoRts | 7:44805 | DOI: 10.1038/srep44805 At this point, we raise a fundamental question of "Can we design a model experiment to investigate the role of defects in the chemical reaction between the oxide surface and Pb contained solution? " We used molecular beam epitaxy (MBE) [24][25][26] as a tool to create defect-free NiO thin films on Mg(001) substrate, which can grow high-purity single crystal quality epitaxial films via operating in ultra-high vacuum (UHV) chamber and using high purities of the beam fluxes in the Center for Nanoscale Materials at Argonne National Laboratory. For the oxide films with defects, we used the process of growing polycrystalline NiO thin films on Ni(110) single crystals in UHV chamber reported elsewhere 27,28 . Using the samples with and without defects, we directly investigated the role of defects in Ni oxide thin films on their chemical reaction in water and Pb-contained solution. High resolution X-ray reflectivity was adopted to measure the interface structure between oxide and water or oxide and Pb-contained solution, and in-situ atomic force microscopy were used to investigate the morphology changes of oxide films in water and Pb-contained solution. Results and Discussion In order to explore the effect of defects in NiO layer on the chemical reaction between NiO layer and water as well as Pb contained solution, we prepared two types of NiO: one is as-grown NiO on Ni(110) substrates where we expect to have defects such as phase boundary, and the other is MBE grown epitaxial NiO on MgO(001), which is relatively free from such (defect-free NiO). The changes in interfacial structure and surface morphology in water and Pb-contained solution were investigated by high resolution X-ray reflectivity (HRXR) and in-situ atomic force microscopy (AFM). Figure 1 shows the measured X-ray reflectivity data from the as-grown NiO on single crystal Ni(110) surface in helium environment and in deionized water, respectively. For the measured reflectivity in helium environment, the reflectivity shows distinct beating patterns at lower q side corresponding to the existence of at least two layers with different densities. However, near the substrate (220) Bragg peak, the regular film fringes appeared along the substrate reflectivity. This is a strong evidence that one of these layers have epitaxial relationship with the substrate, which suggests the existence of a crystalline NiO layer. 
As the film Bragg peak shows only one epitaxial layer without complicated beating pattern, we think that the other layer is either amorphous or polycrystalline phase 29 . On the other hand, the reflectivity measured in deionized water from the identical surface shows completely different features in the intensity distribution compared to that measured in helium environment. First, the intensities at the mid-zone and higher momentum transfers (q) above the substrate (220) Bragg peak were hardly measurable while asymmetric intensity distribution before and after the Bragg peak were accompanied, which is similar to the effects of surface defects or defect clusters on the X-ray reflectivity intensities 30 . Second, the distinct film fringes with two layers and the epitaxial interference of the film Bragg peak disappeared completely. Two possibilities are suggested for these observations: one is that the as-grown NiO layer, regardless of its phase, interacts with water to be dissolved and the metal surface starts to form nickel hydroxide ad-layers whereas the other is that the intense X-ray generates reactive radicals in water nearby the oxide surface and causes secondary beam damage effects (e.g., continuous etching of the bare surface as well as oxide dissolution) 31,32 . We conducted an experiment to explore the X-ray beam damage effect, which will be discussed in the latter part of our discussion. It should be noted that we were not able to measure the X-ray reflectivity in Pb contained solution due to the disappearance of surface crystallinity when the sample was in contact with water. Figure 2 shows the X-ray reflectivity results of MBE grown nickel oxide on MgO substrate in helium, deionized water and 10 mM Pb-contained solution, respectively. The red boxes show the magnified images of the reflectivity close to the right shoulder of each Bragg peak. Commonly for all three cases, at the MgO(002) substrate Bragg peak, the film fringes at the left and right sides of the Bragg peak show slightly different features, of which difference is more prominent at MgO(004) Bragg peak. It results from d-spacing difference between MgO substrate and NiO films. And, at low q region, there is a phase shift and intensity reduction in the reflectivity of solution environment (deionized water or 10 mM Pb contained solution), due to the existence of liquid phase on the solid surface, relative to that of helium environment. As the modulus of liquid structure factor rapidly decreases with q 33 the effect of existence of water phase on the observed intensity distribution is appearing only within low q range. Whether the ordering of liquid phases on the solid film is "layered" or follows "error-function" is undistinguishable, however, due to the overwhelming film fringe signals. The difference in the reflectivity data between de-ionized water and 10 mM Pb-contained solution in their intensity distribution is even smaller so that they are hardly distinguishable. Although the experiment could be better tailored to possibly separate out the critical difference between two liquid phases, we interpret the result as the MBE-grown nickel oxide is relatively inert both in pure water and in Pb-contained solution at room temperature compared to that grown on Ni(110) substrate. Figure 3 shows the in-situ AFM images for as-grown NiO on Ni(110) and MBE grown NiO on MgO(001) in air, water and 10 mM Pb-contained solution for 15 hrs. 
For the MBE grown NiO, the surface morphology shows dense diamond shaped facets with root-mean-square (rms) roughness of 0.7 nm followed by round shapes in water and Pb-contained solution with monotonous increase in rms roughness to 1.4 nm and 1.9 nm, respectively. For as-grown NiO on Ni(110) substrate, in contrast, the morphology in air shows a flat surface with rms roughness of ≈ 0.60 nm. Large and small crater-shaped spots are randomly distributed on the surface. After exposure in water for 15 hrs, the surface morphology changed to square shape due to the surface reconstruction by oxygen in water. The rms roughness slightly increased to ≈ 0.88 nm. In 10 mM Pb-contained solution, the surface was covered by crystallite particles with 0.1~0.3 μ m size with drastic increase in rms roughness up to ≈ 17 nm. As evident from Fig. 3(c), the surface morphology underwent a significant change in Pb-contained solution. Figure 3(g) shows the roughness changes in air, water and 10 mM Pb-contained solution for both samples. For the MBE grown NiO, the roughness shows monotonous increase with change in environment from air to Pb-contained solution, while, in the case of as-grown NiO, the roughness shows drastic increase between water and Pb contained solution. In order to investigate the role of X-ray irradiation in the drastic decrease of the X-ray reflectivity intensity in water for as-grown NiO, we designed and performed continuous measurement of X-ray reflectivity of NiO immersed in water near the Bragg peak (L = 0.25 r.l.u.) with and without X-ray irradiation for a given period of time in an alternating manner as shown in Fig. 4(a). The red circles represent the measured intensity at each time, and the orange and blue lines represent the exposure in water with and without X-ray irradiation, respectively. Within 1 hr of immersion in water, the intensity decreases as a function of time even without X-ray irradiation and saturates after 60 mins. After 7 hrs immersion without X-ray, no significant intensity drop was observed before irradiation, while after exposure to X-ray, the intensity drastically started to drop and reached at a new saturation value. Within additional 1 hr, the intensity drop was caused probably by the chemical interaction between oxide and water until a certain saturation point. After saturation, the intensity was maintained whereas, after irradiation for 10 mins, the intensity started to drop and reached a new saturation level. At a glance on Fig. 4, it seems rather difficult to conclude whether X-ray takes a role for the chemical reaction or not. Before further analyzing our data, it is worth mentioning relevant research reports on the role of X-ray irradiation on the chemical reaction of the oxide layer with water. Synchrotron based X-ray beams can affect the charge, bond and orbital states of strongly correlated systems [34][35][36][37] . Furthermore, ionization of molecules by X-ray can lead to radiolysis and formation of highly reactive free radicals. These radicals may then react chemically with neighbouring materials even after the original radiation has stopped. However, most studies performed to explore the effect of ionizing radiation and corrosion in nuclear materials was related to the γ-radiation, which has higher energy than X-ray. For example, Daub et al. 38 revealed that the rate of carbon steel corrosion depends on the concentration of H 2 O 2 which results from water radiolysis, and OH − accelerates the corrosion process. 
Water radiolysis is the decomposition of water molecules due to the ionizing radiation, which creates • HO radical, H • atom, HO 2 • , H 3 O + , OH − , H 2 O 2 and H 2 39 . However, the X-ray energy that we used for our study is 17 keV, which is much lower than that used (~GeV) for the water radiolysis study. From this reasoning, we presumed that the beam damage of X-ray might be negligible. To validate our presumption of negligible X-ray beam damage effect, we reorganized Fig. 4(a) so that we could compare the difference between the X-ray reflectivity intensities with and without X-ray exposure as shown in Fig. 4(b). Figure 4(b) shows the average reflectivity plot with and without X-ray exposure. We performed two-sample t-test to determine whether two samples are likely to have come from same two underlying populations that have the same mean. The p-value of our t-test is 0.635, much larger than the threshold value of 0.05, indicating that there is no significant statistical difference between the mean values of X-ray reflectivity with and without X-ray exposure. Furthermore, the X-ray reflectivity for the MBE grown NiO sample remained constant for more than a day. Therefore, we confirmed that X-ray did not affect the chemical reaction on the surface in our system. As such, we can conclude that the different behaviors observed from two model systems, i.e., defect-free and defect-rich Ni oxide layers stem from the existence of the defects and neither from the bulk part of the oxide layer nor the external x-ray interacting with the water molecules. Indeed, the effect of defects on the chemical reaction between oxide film and water molecules has been reported by previous studies. Barbier et al. 40 suggested that the chemical reaction of NiO(111) in water mainly takes place at defects, and Kitakatsu et al. 41 claimed that the Ni(100) areas do not adsorb hydroxyl groups on regular sites but possibly on defect sites. Kofstad 42 reported a comprehensive study on the importance of lattice, grain boundary and dislocation for metal oxide, but its defect structure and transport properties are still subject to considerable discussion. Our results from X-ray reflectivity and AFM clearly show the difference in reaction with water between defective oxide and defect-free oxide. In case of the defect-free oxide, the oxide layer does not react with water and 10 mM Pb contained solution while defect-rich oxide underwent significant changes in morphology and interfacial structure. It clearly shows that the defect free oxide has stronger passivity than defect-rich oxide. Other possible mechanisms responsible for our findings include the role of polycrystalline and/or amorphous phases. The existence of amorphous phase may facilitate the chemical reaction between oxide layer and either water or Pb-contained solution. However, further investigation is needed to clarify the effects of these phases on the chemical reaction of nickel oxide layer. Conclusions In summary, we investigated the role of defects in Ni oxide thin films on their chemical reaction in water and Pb contained solution. We used MBE grown NiO on MgO(001) substrate as the defect free NiO prototype, and NiO grown on Ni(110) single crystal as the one with defects. For the MBE grown NiO, we observed that the surface morphology changes in water and Pb-contained solution with monotonic increase in roughness. However, no significant changes were detected in the high-resolution X-ray reflectivity data. 
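The statistical check described here is a standard two-sample t-test on the saturated reflectivity values measured with and without X-ray exposure; a minimal SciPy sketch is shown below, with placeholder intensity arrays because the individual measured values are not tabulated in the text.

```python
import numpy as np
from scipy import stats

# Placeholder intensity series (arbitrary units); the real analysis would use the
# saturated reflectivity values recorded with and without X-ray irradiation.
intensity_with_xray = np.array([0.92, 0.95, 0.90, 0.93, 0.94])
intensity_without_xray = np.array([0.94, 0.91, 0.93, 0.95, 0.92])

# Two-sample t-test: are the two means statistically distinguishable?
t_stat, p_value = stats.ttest_ind(intensity_with_xray, intensity_without_xray)

# A p-value above 0.05 (the paper reports 0.635) means the null hypothesis of
# equal means cannot be rejected, i.e. no detectable X-ray beam-damage effect.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, significant = {p_value < 0.05}")
```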
Meanwhile, in the case of native grown NiO on Ni(110), significant changes in both the morphology and atomistic structure at the interface were observed when immersed in water and Pb-contained solution. Furthermore, the reflected intensity decreased with exposure time, implying that the chemical reaction of oxide layer initiates from the defect sites and continues until the oxide undergoes a full phase transition into nickel hydroxide phase. Methods Materials. Two types of samples were used in this study. One is native grown nickel oxide on Ni(110) single crystal, and the other is epitaxially grown (001)NiO on MgO(001) substrate by molecular beam epitaxy (MBE) method. The reason we used NiO(110) for polycrystalline sample was based on our findings that it contains grain boundaries and mixed amorphous and crystalline phases that are ideal for studying the effect of defects on the reaction of NiO surface with Pb-contained solution 29 . And the reason we chose NiO(001) as our model system was because the epitaxially grown NiO(001) is the most studied system as well as the easiest to achieve epitaxial state close to single crystal. Ni(110) single crystal (99.99%) with 10 mm diameter and 1.00 mm thickness was purchased from a commercial source (Princeton Scientific Corp.). The purchased sample surface was further mechanically polished with 0.03 μ m alumina colloidal solution (pH ~3.5) followed by an electro-polishing with current density of 24.8 mA/mm 2 and ~45 V for 50 s in mixture solution of 30% nitric acid -70% methanol cooled by dry ice. After the electro-polishing, the sample was sputter-cleaned by Ar + with 0.5 kV and 10 mA in 2.0 × 10 −5 torr vacuum (10 mins) followed by annealing at 700 °C in 1.0 × 10 −8 torr vacuum (5 mins) in an ultra-high vacuum (UHV) chamber at 33-ID-E beamline of Advanced Photon source (APS), Argonne National Laboratory. The sputtering and annealing procedure was repeated several times until an in-situ Reflection High-Energy Electron Diffraction (RHEED) pattern confirmed a homogenous single crystal surface. The cleaned surface was then exposed to high-purity oxygen gas in 4.0 × 10 −6 torr for 4 mins to grow a thin native nickel-oxide layer. The sample was kept in a sealed container with minimal contact with air to avoid continuous growth of uncontrolled oxide layers with time until the X-ray reflectivity measurement was performed in helium environment first and in water in sequence. The average lattice constant of NiO polycrystalline film was 4.184 Å, which was measured from the cross-section TEM images. The strain states were complex, but the total axial strain of + 3.2% corresponds to ~7 GPa of tensile stress in the surface normal direction and the total transverse strain was calculated to be − 1.0%. The film thickness was 2.0~2.5 nm 29 For a prototype of defect free nickel oxide, Ni(001) film was epitaxially grown on MgO(001) substrate (10 mm × 10 mm × 1 mm, one-side polished) by MBE. We estimate the out-of-plane and in-plane lattice constants to be 4.158Å and 4.213 Å 43 . The substrate was outgassed in UHV chamber up to 400 °C, and the nickel oxide was grown at 300 °C in 2.5 × 10 −6 Ozone environments up to 7.2 nm, which is the NiO(001) film thickness. The film was characterized by atomic force microscopy (AFM) and the rms roughness was 0.718 nm. High resolution X-ray reflectivity. The native NiO layer grown on Ni(110) single crystal substrate was measured in helium environment and in bulk water using a thin-film cell, respectively, at room temperature. 
High resolution specular X-ray reflectivity measurements were performed at beamline 5-IDC of APS. The HRXR measurements basically scan the (00 L) crystal truncation rod (CTR) intensities along the surface normal direction, where L is the reciprocal lattice unit. The incident X-ray energy was 17.463 keV with ~8.0 × 10 10 photons/sec beam flux. The reflected X-ray intensities were measured by Pilatus 100 K pixel array detector after 5 mm vertical and 5 mm horizontal slits. The background subtraction and the peak intensity integration were performed over ranges from L = 0.1 to 1.2 and d (220) = 1.246 Å. As qd = 2π L relation, L = 1.2 corresponds to q max = 6.0 Å −1 , which allows the real space features to be resolved with ~0.5 Å resolution. For the MBE grown nickel oxide, we moved beamline from 5ID-C to 33BM-C due to the beam schedule. The range of L is from 0.1 to 2.2 and d (002) = 2.072 Å. Thus, L = 2.2 corresponds to q max = 6.7 Å −1 . The incident beam energy was 17.864 keV with ~6.0 × 10 8 photons/sec. High-resolution x-ray reflectivity was measured in helium, water, and 10 mM Pb-solution with varying pH conditions (3, 7 and 11). In-situ atomic force microscopy (AFM). In order to investigate the surface morphology changes in air, water, and Pb-contained solution, the film surfaces were characterized by in-situ AFM using ac mode at Oak Ridge National Laboratory. The scan size was 1 μ m × 1 μ m and the scan rate was 0.8~1.2 Hz. All images were taken using ultra-high frequency AFM probes with arrow shaped tip at the very end of cantilever coated by reflex aluminum.
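The reciprocal-space bookkeeping quoted here follows directly from q·d = 2πL; the short sketch below reproduces the quoted q_max values for the two samples and the corresponding real-space resolution, taken as roughly π/q_max.

```python
import math

def q_from_L(L, d_angstrom):
    """Momentum transfer from the reciprocal lattice unit L: q = 2*pi*L / d."""
    return 2 * math.pi * L / d_angstrom

for label, L, d in [("NiO/Ni(110), d(220) = 1.246 A", 1.2, 1.246),
                    ("NiO/MgO(001), d(002) = 2.072 A", 2.2, 2.072)]:
    q_max = q_from_L(L, d)
    # Real-space features are resolved at roughly pi/q_max (~0.5 A for q_max ~ 6 1/A).
    print(f"{label}: q_max = {q_max:.1f} 1/A, resolution ~ {math.pi / q_max:.2f} A")
```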
Discovery and Diagnosis of a New Sobemovirus Infecting Cyperus esculentus Showing Leaf Yellow Mosaic and Dwarfism Using Small-RNA High Throughput Sequencing C. esculentus is a profitable crop in Valencia, Spain, but the emergence of a disease causing of leaf yellow mosaic, dwarfism, and a drastic decrease in tuber production has become a problem. The small-RNA high-throughput sequencing (HTS) of a diseased C. esculentus plant identified only one virus, which could be the causal agent of this disease. The amino-acid comparison with viral sequences from GenBank and phylogenetic analyses indicated that this was a new species of genus Sobemovirus, and the name Xufa yellow dwarf virus was proposed. Completion with Sanger sequencing yielded a contig of 3072 nt corresponding to about 75% of the typical genome of sobemoviruses, including ORFs 2a (polyprotein-containing protease, VPG, and other proteins), 2b (RNA-dependent RNA polymerase), and 3 (coat protein). The nucleotide sequence was used to develop fast and accurate methods for the detection and quantification of xufa yellow dwarf virus (XYDV) based on reverse transcription (RT) and DNA amplification. XYDV was detected in leaves and tubers and showed a high incidence in the field in both symptomatic (almost 100%) and asymptomatic (70%) plants, but its accumulation was much higher in symptomatic plants. The relevance of these results for disease control was discussed. Introduction Cyperus esculentus, known as tiger nut, chufa or xufa, yellow nutsedge, iron water chestnut, underground chestnut, underground walnut, earth almond, ginseng fruit, ginseng bean, etc., is considered an invasive weed in most tropical, subtropical, and Mediterranean regions. However, C. esculentus is cultivated in a few places, since it produces edible sweet nut-like tubers, which are used mainly as a side dish in some Western African countries and as fodder for animals in some American countries [1]. In Europe, C. esculentus is cultivated mainly in a small region of about 400 ha in Valencia, Spain, and is used to elaborate a sweet milky beverage named horchata that is served cold in summer as a natural refreshment. The elaboration and commercialization of horchata represent a very profitable industry in Valencia, producing about 50 million liters per year that are prepared from about 6000 tons of dried tubers, where half of them is produced in Valencia and the other half is imported from Western Africa. Since 2015, farmers have found diseased plants showing leaves with a yellowed mosaic pattern of parallel discontinuous streaks, dwarfism in the aerial parts and roots, and a severe reduction in tuber production ( Figure 1). Figure 1. Symptoms found in some C. esculentus plants: leaves with yellowing or chlorosis following a mosaic pattern in streaks and stunting in the aerial parts and roots with lower tuber production. The first step to eradicate or mitigate the disease is the identification of the causal agent. The symptoms of leaf yellowing and plant stunting present in these C. esculentus plants are typical of viral infections. The high-throughput sequencing (HTS) of viral small RNAs (produced by the plant defense mechanism based on RNA silencing) is a powerful tool generating in a short time thousands of unbiased sequences corresponding to all viruses present in the sample [2,3]. HTS does not require previous knowledge of the targeted organisms, and it is especially suited to discovering new viruses. 
The obtained nucleotide sequences can be ascribed to different taxonomic levels, such as virus species, genera, or families, according to the nucleotide or amino-acid identity with known viral sequences from the GenBank database or the presence of sequence motifs [4]. However, HTS is too expensive for routine diagnosis, and it is necessary to develop rapid and accurate detection techniques for each virus to study virus incidence and epidemiology and to apply disease-control measures such as certification and eradication. The techniques based on reverse transcription (RT) and DNA amplification are the fastest to develop and can be designed from the sequences obtained with HTS. Polymerase chain reaction (PCR) is the most widely used, enabling fast, highly specific, and sensitive diagnosis, but real-time quantitative PCR (qPCR) with SYBR Green or TaqMan probes is more sensitive and can quantify the virus titer, which is very useful to evaluate disease-control measures such as the breeding of resistant cultivars and cross protection [4]. Loop-mediated isothermal amplification (LAMP) is a specific and sensitive technique with the advantage of not requiring expensive thermal-cycling instruments, being less sensitive to inhibitors than PCR and qPCR, and having great potential to be used directly in the field [5]. In this work, a C. esculentus plant with leaf mosaic yellowing and dwarfism was analyzed using small-RNA HTS, showing that only one virus was present. The amino-acid sequence comparison and phylogenetic analyses showed that it was a new species of genus Sobemovirus that was proposed to be named Xufa yellow dwarf virus. The obtained nucleotide sequence was used to develop RT-PCR, RT-qPCR with SYBR Green or a TaqMan probe, and RT-LAMP for the fast and specific detection and quantification of xufa yellow dwarf virus (XYDV). These detection methods were applied to detect XYDV in leaves and tubers, evaluate its incidence in the field, and estimate XYDV accumulation in plants.
Identification of a New Sobemovirus

The HTS of RNA extracts from a diseased C. esculentus plant yielded 622 contigs, but only 15 corresponded to viruses or viroids, and the rest were from the host. The Blastn analyses showed a 59 nt contig with 92% identity with Hop stunt viroid (HSVd). However, the real-time RT-qPCR analysis with specific HSVd primers [6] was negative for this plant and for other plants with the same symptoms. In addition, the 59 nt contig was formed by small RNAs with low read counts and no similarity with HSVd, indicating that HSVd was not present in these plants and that this contig was an assembly artifact. The Blastx analyses showed 14 contigs with amino-acid similarity to viruses of genus Sobemovirus, such as rottboelia yellow mottle virus (RYMoV) and ryegrass mottle virus (RGMoV). To fill the gaps between contigs and confirm the sequences obtained with HTS, primers were designed (Table 1) and used for RT-PCR and Sanger sequencing. The sequence was extended by 105 and 91 nt towards the 5′ and 3′ ends, respectively, with a 5′/3′ RACE system. When all sequences were assembled, a single contig of 3072 nt was obtained; this corresponded to the genome of a virus with amino-acid similarity to members of genus Sobemovirus (Figure 2a). This virus was named Xufa yellow dwarf virus based on C. esculentus' name in the local language, "xufa" (Valencian/Catalan), and the symptoms. No other viruses or viroids were identified.

Table 1. Oligonucleotides (primers and TaqMan probe) designed and used in this work.

Figure 2. Genome organization of XYDV (a) and phylogenetic analysis of XYDV and the members of genus Sobemovirus listed in Table 2 (b). GenBank accession numbers and significant bootstrap values are indicated.

The sequenced portion of the XYDV genome included three ORFs (Figure 2a). ORF 2a (about 90% sequenced) encodes polyprotein P2a containing a transmembrane-anchoring domain, a serine protease, a VPg, and other products called P10 and P8. ORF 2b (about 94% sequenced) encodes an RNA-dependent RNA polymerase (RdRp) expressed as a fusion protein via a −1 programmed ribosomal frameshift (−1 PRF) with respect to ORF 2a. The XYDV sequence contains the −1 PRF signal consisting of a slippery sequence, 5′-UUUAAAC-3′, followed by a stem-loop structure.
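As a simple illustration of how such a frameshift signal can be located, the following minimal sketch scans an RNA sequence for the slippery heptamer described above. It is not part of the authors' pipeline, and the example sequence is hypothetical.

```python
# Minimal sketch (not from the paper's pipeline): locate candidate -1 PRF slippery
# sequences (5'-UUUAAAC-3') in an RNA sequence and print the downstream context
# where a stem-loop would be expected. The example sequence is hypothetical.

SLIPPERY = "UUUAAAC"

def find_slippery_sites(rna: str, context: int = 40):
    """Return (position, downstream context) for every slippery heptamer found."""
    rna = rna.upper().replace("T", "U")   # accept DNA input as well
    hits = []
    start = 0
    while (pos := rna.find(SLIPPERY, start)) != -1:
        hits.append((pos, rna[pos + len(SLIPPERY): pos + len(SLIPPERY) + context]))
        start = pos + 1
    return hits

if __name__ == "__main__":
    toy = "GCAUUUAAACGGGCUAGCUAGCCCGUUUAAACAU"   # hypothetical fragment
    for pos, downstream in find_slippery_sites(toy):
        print(f"slippery site at nt {pos + 1}, downstream: {downstream}")
```

Whether a downstream stem-loop actually forms would of course require RNA secondary-structure prediction, which is beyond this sketch.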
ORF 3 (about 75% sequenced) encodes the coat protein (CP), expressed from a subgenomic RNA. This nucleotide sequence was deposited in GenBank under accession number ON828429. The comparison of the XYDV sequence with those of the members of genus Sobemovirus showed amino-acid identities ranging from 10.8 to 63.2% (Table 2). The viruses most similar to XYDV were RYMoV, RGMoV, and artemisia virus A (ArtVA). The three ORFs showed different levels of variability, with ORF 2b (RdRp) being the most conserved and ORF 3 (CP) the most variable. The phylogenetic relationships were similar for the three ORFs, so only ORF 3 (CP) is shown in Figure 2b. XYDV formed a statistically supported clade with RGMoV, RYMoV, and ArtVA, confirming that XYDV is a member of genus Sobemovirus, while the low amino-acid identity between these viruses indicates that XYDV is a distinct species of this genus.

Development of Methods for Fast Detection and Quantification of XYDV

Based on the obtained nucleotide sequence, primers and a TaqMan probe (Table 1) were designed for techniques based on reverse transcription (RT) and DNA amplification: polymerase chain reaction (PCR), real-time quantitative PCR (qPCR) with SYBR Green, qPCR with a TaqMan probe, and loop-mediated isothermal amplification (LAMP). These techniques detected XYDV in the leaves and tubers of C. esculentus plants and were negative for the negative controls, which were water, RNA and crude extracts from healthy wild C. esculentus plants and Nicotiana benthamiana plants, and serial dilutions of these extracts (data not shown). The sensitivity of these techniques was compared using serial ten-fold dilutions of 10 ng/µL total-RNA extracts from an XYDV-infected plant (Figure 3a). RT-PCR and RT-LAMP detected XYDV up to the fourth dilution, corresponding to 1 pg, whereas RT-qPCR with SYBR Green or the TaqMan probe detected it in the fifth dilution, corresponding to 100 fg. The Ct values were similar for both RT-qPCR techniques, being slightly lower for RT-qPCR with the TaqMan probe (Figure 3a). No amplifications were obtained from the negative controls. To find out if the RNA-extraction step could be avoided, these amplification methods were assayed with concentrated and ten-fold serial dilutions of crude plant extracts (only ground with buffer) from two XYDV isolates: CM1, obtained from a plant with severe symptoms, and CMN1, from an asymptomatic plant (hypothesized to have a lower viral titer; see below). RT-PCR detected XYDV in the CM1 crude-extract concentrate and the two subsequent 10-fold dilutions but not in any of the CMN1 crude extracts, despite these being positive for the RNA extracts (Figure 3b). The intensity of the RT-PCR bands for CM1 was similar for the crude extract and the three serial dilutions, suggesting that the viral RNA titer at the lowest dilution was enough to attain maximum amplification. In addition, the concentrated crude extracts did not contain enough inhibitors to interfere with RT-PCR. RT-LAMP detected XYDV in the CM1 concentrate and the two dilutions, as well as in the CMN1 concentrate. Finally, RT-qPCR with SYBR Green and the TaqMan probe detected XYDV in CM1 crude extracts down to dilution 1/1000 and in CMN1 down to dilution 1/10 (Figure 3b). Curiously, the Cts of the CM1 concentrate were higher than those of dilutions 1/10 and 1/100, suggesting the presence of inhibitors.
Incidence of XYDV in the Field and Viral Accumulation

Leaf samples were collected from 40 symptomatic and 20 asymptomatic C. esculentus plants from eight plots of the five most productive municipalities of the Horta Nord comarca (county), Valencia province, Spain, in 2019. Crude extracts were analyzed with RT-LAMP, and RNA extracts were analyzed with RT-qPCR using the TaqMan probe and with RT-LAMP (Figure 4a). The RT-qPCR analysis of RNA extracts detected XYDV in 39 symptomatic plants (97.5%), whereas RT-LAMP was slightly less sensitive and detected XYDV in 37 or 38 symptomatic plants (92.5 or 95.0%) using RNA or crude extracts, respectively. Curiously, XYDV was also detected in a high proportion of asymptomatic plants (70.0% with RT-qPCR). The difference in XYDV incidence between symptomatic and asymptomatic plants was not statistically significant. XYDV was also detected with RT-LAMP and RT-qPCR in the tubers of five symptomatic plants.
The RT-qPCR analyses showed a correlation between XYDV accumulation in leaves and symptom manifestation, since the XYDV titer was about 100 times higher in symptomatic (4.8 ± 0.2) than in asymptomatic plants (2.2 ± 0.4), and this difference was statistically significant (Figure 4b). The accumulation of XYDV in tubers from the five symptomatic

Discussion

The cultivation of C. esculentus and the elaboration of horchata have become substantial revenue sources for the Horta Nord county in Valencia, Spain. However, several diseases have arisen in the last decades, such as tuber rot [8] and leaf necrosis [9] produced by fungi, and the black spot of unknown etiology [10]. These diseases have caused severe economic losses and threatened tuber production. This work describes the first viral disease detected in C. esculentus in Spain. Small-RNA HTS enabled the identification of a new virus of genus Sobemovirus, and the name Xufa yellow dwarf virus was proposed. To our knowledge, only four viruses infecting C. esculentus have been reported: turnip mosaic virus (TuMV) in Zimbabwe [11]; impatiens necrotic spot virus (INSV) in Georgia, USA [12]; brome streak mosaic virus (BrSMV) in Hungary [13]; and rice yellow mottle virus (RYMV) in Nigeria [14]. The low number of viruses described worldwide is probably because the species has been poorly investigated, since C. esculentus is not widely used in agriculture. The emergence of viral diseases in different crops in Valencia and other Mediterranean areas is frequent, mainly caused by intensive agriculture, climate change, and the global movement of plant material [15]. The detection of xufa yellow dwarf virus (XYDV) in tubers could be important, since tubers are used as seeds and could be a means of transmission, as reported for some sobemoviruses [16]. Thus, XYDV could have been introduced into Valencia through the importation of C. esculentus tubers from Western Africa. Tubers are imported without a phytosanitary passport, as they can only be used for human or animal consumption. However, some farmers cultivate African tubers despite this being forbidden, with a high risk of dispersing pathogens and pests. Curiously, another sobemovirus, RYMV, has also been detected in C. esculentus in Western Africa [14]. Knowing the transmission means of viruses is key to applying prophylactic measures for disease control, e.g., certification and the control of vectors. Transmission assays and/or epidemiological studies are necessary to know whether XYDV is transmitted by tubers, mechanical wounding, and/or insects, as other sobemoviruses are [16]. In case XYDV is tuber-transmitted, in addition to the sanitary certification of imported tubers, other control measures should be tested, such as the selection of XYDV-free tubers, chemical and/or thermal disinfection, or in vitro culture [4,10]. The molecular techniques developed here are very sensitive and can be used to test the effectiveness of removing XYDV from tubers. RT-LAMP could be used by farmers in situ, since it only requires a thermal bath, can be directly used with plant crude extracts, and the amplification can also be visualized by naked-eye inspection using turbidity-changing ion indicators or color-changing fluorescent molecules [5].
These techniques were applied in a field survey, and XYDV was detected in almost all C. esculentus plants with leaf yellow mosaic and dwarfism and in 70% of the asymptomatic plants. The presence of XYDV in asymptomatic plants and the high overall incidence of this virus make eradication strategies such as roguing unfeasible [4]. In this scenario, the most plausible disease-control strategy would be using virus-free tubers as seeds. Farmers could test for the presence of XYDV in these tubers with RT-LAMP in each sowing season and evaluate the incidence in the field. However, as mentioned above, it is necessary to know whether XYDV can be transmitted by other means, which would require additional measures. The RT-qPCR analyses showed that the XYDV titer was significantly higher in symptomatic than in asymptomatic plants. This suggests that XYDV could interfere with root development and plant growth in seedlings when the XYDV concentration surpasses a certain threshold. A positive correlation between symptoms and viral accumulation has also been reported for several plant viruses [17][18][19], although this does not apply to tolerant hosts [20]. The correlation between XYDV accumulation and symptom manifestation, as well as the fact that XYDV was the only viral sequence found using HTS, strongly supports XYDV being the causal agent of the disease of leaf yellow mosaic and dwarfism in C. esculentus.

Plant Material

Leaf samples and tubers were collected from 40 symptomatic and 20 asymptomatic C. esculentus plants from eight plots in the municipalities of Alboraya, Burjassot, Godella, Massalfasar, Moncada, and Vinalesa, which belong to the comarca (county) of Horta Nord, Valencia province, Spain.

Extract Preparation

About 200 mg of leaf tissue or tuber was ground with liquid nitrogen in microtubes containing glass beads in a TissueLyser power homogenizer (Qiagen, Hilden, Germany). Total-RNA extracts were obtained with the Spectrum Plant Total RNA Kit (Sigma-Aldrich, St. Louis, MO, USA). Crude extracts were prepared by resuspending the ground plant tissue in 500 µL of STE buffer (100 mM NaCl, 10 mM Tris-HCl, 1 mM EDTA, pH 7.5) and homogenizing with a vortex mixer for 2 min. RNA and crude extracts were kept at −80 °C until use.

Sequencing

For the HTS of small RNAs, total-RNA concentration and purity were determined with the Qubit® RNA assay kit in a Qubit® 3.0 fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) and a NanoPhotometer® spectrophotometer (Implen, Westlake Village, CA, USA), respectively. RNA integrity was determined in an Agilent Bioanalyzer 2100 system with the RNA Nano 6000 assay kit (Agilent Technologies, Santa Clara, CA, USA). cDNA was obtained from 1 µg of total RNA with the NEBNext® Multiplex Small RNA Library Prep Set for Illumina® (Sigma-Aldrich, St. Louis, MO, USA) and sequenced on the Illumina NextSeq550 platform (Illumina, San Diego, CA, USA). Sequencing adapters were trimmed from the reads, and low-quality reads were filtered in November 2019 using SeqTrimNext V2.0.67 software (https://github.com/dariogf/SeqtrimNext, accessed on 30 November 2019) by applying the standard parameters for Illumina short reads [21]. VirusDetect V 1.7 [22] and the virus nucleotide database (http://bioinfo.bti.cornell.edu/ftp/program/VirusDetect/virus_database/v239, accessed on 30 November 2019) were used for virus identification.
Reads were aligned with the virus nucleotide database using the Burrows-Wheeler Aligner program [23] and assembled with Velvet V.1.2.09 software [24]. Reads not aligning with viral sequences were recovered, assembled, and compared to the GenBank database with Blastn and Blastx [25] to find similar viral nucleotide and/or protein sequences. Sanger sequencing was performed on RT-PCR products purified with the QIAquick PCR Purification Kit (Qiagen) using a BigDye Terminator V. 3.0 Cycle Sequencing kit (Thermo Fisher Scientific, Waltham, MA, USA) in an ABI 3130 XL capillary sequencer (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA). Afterwards, the 5′/3′ RACE Kit, 2nd Generation (Roche, Basel, Switzerland), was used to determine the 5′ and 3′ terminal sequences.

Amino-Acid Identity and Phylogenetic Analyses

The nucleotide sequences of ORFs 2a, 2b, and 3 of XYDV were translated to amino acids (https://web.expasy.org/translate, accessed on 30 November 2019), and the equivalent sequences of members of genus Sobemovirus were retrieved from GenBank (Table 2). These sequences were aligned with the CLUSTALW algorithm [26] implemented in MEGA X software [27]. Amino-acid identities were estimated with the formula 100 × (1 − p-distance) in MEGA X. The phylogenetic relationships of the three ORFs were inferred with the Maximum Likelihood method [28] using the LG + F + G model, which is based on the LG matrix of amino-acid substitution [29] with empirical amino-acid frequencies and five discrete gamma rates. The statistical significance of the internal nodes was estimated with a bootstrap analysis with 100 replicates [30]. All these analyses were carried out in MEGA X.
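For readers unfamiliar with the identity metric above, the following minimal sketch reproduces the calculation 100 × (1 − p-distance) for a pair of aligned amino-acid sequences. The aligned fragments are hypothetical; the actual analysis was performed in MEGA X as described.

```python
# Minimal sketch of the identity calculation used above: percent amino-acid identity
# computed as 100 * (1 - p-distance) over the aligned positions of two sequences.
# The aligned sequences below are hypothetical; the actual analysis was run in MEGA X.

def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity between two aligned amino-acid sequences of equal length.
    Positions where either sequence has a gap ('-') are ignored."""
    if len(aln_a) != len(aln_b):
        raise ValueError("aligned sequences must have the same length")
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    mismatches = sum(a != b for a, b in pairs)
    p_distance = mismatches / len(pairs)          # proportion of differing sites
    return 100.0 * (1.0 - p_distance)

if __name__ == "__main__":
    cp_xydv  = "MAKRLSQ-PTVWG"    # hypothetical aligned CP fragment
    cp_rgmov = "MAKRLTQAPTVFG"    # hypothetical aligned CP fragment
    print(f"identity: {percent_identity(cp_xydv, cp_rgmov):.1f}%")
```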
Primer and Probe Design

The nucleotide sequence obtained for XYDV was used to design primers for RT-PCR and RT-qPCR and a TaqMan probe with Primer3 [31] and Primer Express (Thermo Fisher Scientific, Waltham, MA, USA). Primers for RT-LAMP were designed with LAMP Designer 1.16 (Premier Biosoft, Palo Alto, CA, USA). The designed primers are listed in Table 1.

RT and DNA Amplification

RT was performed by denaturing total RNA or plant crude extracts mixed with 5 µM of random hexamers by heating at 65 °C for 5 min and incubating on ice for 1 min; they were then added to a 20 µL reaction mixture containing 5 mM DTT, 1 mM of each dNTP, first-strand buffer, 20 units of RNaseOUT™ RNase Inhibitor (Invitrogen), and 100 units of SuperScript IV Reverse Transcriptase (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). The reaction mixture was incubated at 23 °C for 10 min, at 50 °C for 10 min, and at 80 °C for 10 min to inactivate the reaction. PCR was performed in a 20 µL reaction mixture containing PCR buffer, 1.5 mM MgCl2, 1 mM of each dNTP, 1 µM of primers X1F and X1R (Table 1), and 0.5 U of Taq DNA polymerase (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). An Eppendorf® Mastercycler EP Gradient was used under the following thermocycling conditions: denaturation at 94 °C for 2 min; 40 cycles at 94 °C for 30 s, 52 °C for 1 min, and 72 °C for 45 s; and an extension step at 72 °C for 5 min. PCR products were analyzed by electrophoresis in 2% agarose gels and visualized with GelRed (Biotium, Fremont, CA, USA) under UV light. For qPCR, the total-RNA concentration was measured with a NanoDrop™ 1000 spectrophotometer and adjusted to 10 ng/µL to normalize the different extractions. Real-time qPCR with SYBR Green was carried out in a LightCycler® 480 (Roche Molecular Systems Inc.) in a 20 µL mixture containing TB Green Premix Ex Taq II (Tli RNase H Plus) (Takara), 0.2 µM of primers qX1F and qX1R (Table 1), and 1/10 of the cDNA from the RT reaction. The thermocycling conditions included denaturation at 95 °C for 30 s and 35 cycles of 95 °C for 5 s and 60 °C for 30 s. The melting-curve analysis of the PCR products showed only one peak, indicating the absence of nonspecific products and/or primer-dimers. One-step real-time RT-qPCR with a TaqMan probe was performed in a LightCycler® 480 (Roche Molecular Systems, Pleasanton, CA, USA) in a 20 µL mixture prepared with the One Step PrimeScript RT-PCR Kit (TaKaRa, Shiga, Japan) and containing 0.2 µM of primers qX2F and X2R and 0.4 µM TaqMan probe (Table 1). The thermocycling conditions consisted of RT at 42 °C for 25 min, followed by denaturation at 95 °C for 15 s and 35 cycles of 95 °C for 5 s and 60 °C for 20 s. For the relative quantification of XYDV, a standard curve was obtained by RT-qPCR of the sample with the lowest Ct and 10-fold serial dilutions, assigning arbitrary values starting from seven. The standard curve showed a high determination coefficient (R² = 0.9994), indicating a strong linear relationship, and the linear regression formula was y = −3.642x + 27.02 (a short worked example of this conversion is given below). One-step RT-LAMP was performed in a 25 µL mixture containing the sample (RNA or crude extracts) previously denatured at 95 °C for 5 min, Isothermal Amplification Buffer, 1.4 mM dNTPs, 5 U WarmStart RT (New England Biolabs, Ipswich, MA, USA), 8 U Bst DNA polymerase (New England Biolabs, Ipswich, MA, USA), and the three XYDV-specific primer pairs: 0.2 µM XF3 and XB3, 1.6 µM XFIP and XBIP, and 0.4 µM XLoopF and XLoopR (Table 1). The mixture was incubated at 68 °C for 1 h in a water bath and heated to 80 °C for 10 min to stop the reaction. The amplification products were analyzed by electrophoresis in 2% agarose gels.

Statistical Analysis

Symptomatic and asymptomatic plants were compared for XYDV incidence and accumulation with chi-squared and unpaired two-tailed Student's t-tests, respectively. The analyses were performed with Statgraphics Plus Software Version 5.1 using an alpha level of 0.05.

Data Availability Statement: The nucleotide sequence of the virus with the proposed name xufa yellow dwarf virus generated in this work was deposited in GenBank under accession number ON828429. All the sequencing-output datasets generated in the study are freely available upon request to the corresponding author.
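As a worked illustration of the relative quantification described above, the sketch below inverts the reported standard curve (y = −3.642x + 27.02) to map Ct values to relative log10 titers and estimates the amplification efficiency from the slope. It assumes the conventional layout in which y is the measured Ct and x the assigned log10 relative quantity; the Ct values shown are hypothetical.

```python
# Minimal sketch (not the authors' script): converting Ct values to relative log10
# titers with the standard curve reported above (y = -3.642x + 27.02), assuming the
# conventional layout in which y is the measured Ct and x the assigned log10 relative
# quantity. Amplification efficiency is derived from the slope in the usual way.

SLOPE = -3.642
INTERCEPT = 27.02

def relative_log10_titer(ct: float) -> float:
    """Invert the standard curve: x = (y - intercept) / slope."""
    return (ct - INTERCEPT) / SLOPE

def amplification_efficiency(slope: float) -> float:
    """Efficiency E = 10**(-1/slope) - 1 (E = 1 means perfect doubling per cycle)."""
    return 10 ** (-1.0 / slope) - 1.0

if __name__ == "__main__":
    for ct in (12.0, 19.0, 25.5):           # hypothetical Ct values
        print(f"Ct {ct:5.1f} -> relative log10 titer {relative_log10_titer(ct):.2f}")
    print(f"estimated efficiency: {amplification_efficiency(SLOPE) * 100:.1f}%")
```

With this reading of the curve, a Ct of 19.0 corresponds to a relative log10 titer of about 2.2, which is consistent with the mean value reported for asymptomatic plants.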
Poly(Lactic Acid)-Based Microparticles for Drug Delivery Applications: An Overview of Recent Advances

The sustained release of pharmaceutical substances remains the most convenient way of drug delivery. Hence, a great variety of reports associated with drug delivery systems (DDS) can be traced in the open literature. Specifically, the use of microparticle systems has received special attention during the past two decades. Polymeric microparticles (MPs) are acknowledged as very prevalent carriers toward an enhanced bio-distribution and bioavailability of both hydrophilic and lipophilic drug substances. Poly(lactic acid) (PLA), poly(lactic-co-glycolic acid) (PLGA), and their copolymers are among the most frequently used biodegradable polymers for drug encapsulation. This review describes the current state-of-the-art research in the study of poly(lactic acid)/poly(lactic-co-glycolic acid) microparticles and PLA copolymers with other aliphatic acids as drug delivery devices for increasing the efficiency of drug delivery, enhancing the release profile, and improving the drug targeting of active pharmaceutical ingredients (API). Potential advances in generics and the constant discovery of therapeutic peptides will hopefully promote the success of microsphere technology.

Introduction

Advances in pharmaceutical and other associated fields demand the site-specific delivery of drugs, vaccines, genes, and many other biomolecules. Furthermore, for the effective treatment of a disease, the stability and safety issues related to these agents may be challenging during the development and storage of advanced marketed products [1,2]. New drug delivery formulations and their applications have been studied for a long time, with the current focus being on microparticle systems and their benefits. The term "microparticle" refers to a spherical particle with a size from 1 µm to 2 mm containing a core substance enclosed by one or more membranes or shells. Microparticles may be further classified as microspheres and microcapsules based on their internal structure. Microspheres are generally formed by a homogeneous matrix in which it is not possible to separate a core and a membrane, and the API is dispersed in the polymer matrix either as small clusters or molecularly. Microcapsules are formulations constituted by a central liquid, solid, or semisolid core containing the API, alone or in combination with excipients, surrounded by a membrane or a continuous polymer coating [3]. Stretching the definition of a microcapsule, we can include not only membrane-enclosed particles or droplets but also solid matrix dispersions without external membranes or wall structures. In order to select the appropriate material for the capsule wall or the matrix, characteristics such as film formation, hydrophilicity and hydrophobicity, release profile, and degradation curve must be considered [4]. As a drug delivery system, MPs overcome the disadvantages of traditional dosage forms and offer many advantages, such as the use of different administration routes, the opportunity of encapsulating various molecules, reduced toxicity, effective and accurate control over long periods, the protection of the encapsulated agent against oxidation, and the ease of administration to people who are unable to provide for themselves, such as children and people with special needs.
MPs can also be used for the controlled release of drugs, which can be adapted by the choice of polymer and its chemical and molecular features, such as molecular weight (Mw), monomer composition, crystallinity, glass transition temperature (Tg), and inherent viscosity [1,5]. These systems consist of a biocompatible polymeric material that allows for control over drug release and protection of the drug cargo [6]. Furthermore, besides drugs, microparticles are ideal candidates for peptide and protein administration. The high molecular weight and polar nature of peptides and proteins result in low membrane permeability, while the structural sensitivity of proteins makes them incompatible with the gastrointestinal tract. Because of this, peptides and proteins are usually delivered as injectable formulations. While humanized antibodies may circulate in the bloodstream for a long time, peptide therapeutics can be cleared from the organism in a matter of minutes, either due to enzymatic degradation or renal clearance. Thus, effective delivery and prolonged release of biologics demand the encapsulation of these agents into therapeutic MPs [7]. A MP depot system is suitable when the following requirements are achieved: (1) the stability of the encapsulated active ingredient is maintained; (2) an optimal drug loading is obtained; (3) a high encapsulation efficiency and yield are accomplished; (4) desired drug release profiles with low initial release are attained; (5) free-flowing particles with good syringeability are produced; and (6) a simple, scalable, and reproducible process is established (working definitions of drug loading, encapsulation efficiency, and yield are sketched in the example below). Some features, such as morphological characteristics, particle size, and polydispersity index (PDI), are essential to ensure stability, encapsulation, and a sufficient release of the drug, and they are directly related to the preparation technique of the MPs. Thus, preparation techniques are selected based on the drug in question and according to the specific application of the microparticle formulation [5]. A brief discussion of the most common techniques is provided below. Recent advancements in MP formulations suggest the use of biodegradable polymers, rather than inorganic materials, for the effective delivery of proteins, peptides, and other biomolecules, due to the potential toxicity of the latter to the human body and their adverse effects on the environment [3]. Furthermore, polymers enable modification of (a) physicochemical properties (e.g., hydrophobicity, zeta potential), (b) drug release profiles (e.g., delayed, prolonged, triggered), and (c) biological properties (e.g., bioadhesion, improved cellular uptake) of the MPs [8]. Biobased and biodegradable polymers, such as poly(lactic acid) (PLA), poly(glycolic acid) (PGA), poly(lactic-co-glycolic acid) (PLGA), poly(hydroxy alkanoates) (PHA), poly(ε-caprolactone) (PCL), and their copolymers, are considered ideal materials for microencapsulated drug delivery systems. They are easily absorbable and can be naturally decomposed through enzymatic or nonenzymatic processes into byproducts that are biocompatible and toxicologically safe for humans, which are then eliminated by the normal metabolic pathways [3]. In addition, they exhibit interesting physical properties and various erosion times, while the fact that they have been approved by the Food and Drug Administration (FDA) has generally facilitated their use as drug delivery systems and biomaterials [9]. Polymeric microparticles are also starting to be used in the field of vaccination.
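The sketch below illustrates the formulation metrics listed in the requirements above, using the definitions most commonly adopted in the microsphere literature; all masses are illustrative and are not taken from any cited study.

```python
# Minimal sketch of the quantities mentioned above, using the definitions most commonly
# adopted in the microsphere literature; all masses below are illustrative, not data
# from any cited study.

def drug_loading_pct(drug_mass_in_mp: float, total_mp_mass: float) -> float:
    """Drug loading (%) = encapsulated drug mass / total microparticle mass * 100."""
    return 100.0 * drug_mass_in_mp / total_mp_mass

def encapsulation_efficiency_pct(drug_mass_in_mp: float, drug_mass_fed: float) -> float:
    """Encapsulation efficiency (%) = encapsulated drug / drug initially fed * 100."""
    return 100.0 * drug_mass_in_mp / drug_mass_fed

def yield_pct(recovered_mp_mass: float, total_solids_fed: float) -> float:
    """Process yield (%) = recovered microparticle mass / total solids fed * 100."""
    return 100.0 * recovered_mp_mass / total_solids_fed

if __name__ == "__main__":
    polymer_fed, drug_fed = 200.0, 20.0      # mg, illustrative
    mp_recovered, drug_in_mp = 190.0, 14.0   # mg, illustrative
    print(f"drug loading:             {drug_loading_pct(drug_in_mp, mp_recovered):.1f}%")
    print(f"encapsulation efficiency: {encapsulation_efficiency_pct(drug_in_mp, drug_fed):.1f}%")
    print(f"yield:                    {yield_pct(mp_recovered, polymer_fed + drug_fed):.1f}%")
```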
MPs can overcome the obstacles derived from the use of needles and are more patient-friendly. A promising painless and controlled system for drug delivery is microneedle patches, which usually contain microparticles able to penetrate the skin [10]. Among the above-mentioned biopolymers, the advantages offered by PLA have led researchers to broadly implement its usage in many applications, including drug delivery systems. Its environmental friendliness, ease of production, recyclability, compostability, biocompatibility, and absence of carcinogenic effects are a few of its benefits. In addition, PLA can be obtained from renewable resources such as wheat, corn, and rice; thus, its production requires 25-55% less energy than that of conventional petroleum-derived polymers. Lastly, PLA's degradation products are non-toxic to humans and the environment [11]. Recently, old challenges have been addressed from new perspectives, driven by advances in related disciplines aiming to take microencapsulation technology one step forward [12]. This review article aims to present the latest microencapsulated formulations based on biodegradable and biocompatible polymers, such as PLA, PLGA, and their copolymers, on a research level.

MP Preparation Techniques at a Glance

Throughout the years, a diverse number of techniques have been developed and can be employed for the preparation of microparticles for drug delivery applications, leading to a great variety of morphologies, structures, and size ranges. They are based on either physicochemical (i.e., emulsion solvent evaporation methods, electrospraying), chemical (i.e., polymerization), or mechanical (i.e., spray drying, microfluidics, supercritical fluid) processes. The most frequently used approach for nano-/microparticle manufacturing is emulsification-solvent evaporation (including double or multiple emulsions). It is a simple, low-cost, fast, and reproducible technique that allows for adjustable particle size by altering the viscosity of the organic/aqueous phases, the homogenization speed, and the concentration of the emulsifier. The general principle is the emulsification of a polymer solution (dissolved in an organic solvent, typically chloroform or dichloromethane) in an aqueous continuous phase (supplied with an emulsifier, e.g., poly(vinyl alcohol)) by mechanical agitation until the solvent partitions into the aqueous phase and is removed by evaporation. The microspheres are then recovered by centrifugation and/or filtration, and lyophilized. The single oil-in-water (o/w) emulsion technique (Figure 1) is generally applied for the encapsulation of hydrophobic or poorly water-soluble active ingredients. In the case of hydrophilic agents, which suffer from low encapsulation efficiency because of rapid drug partitioning into the external aqueous phase when using single emulsions, double or multiple emulsions (most commonly a water-in-oil-in-water, w/o/w, emulsion) are adopted [8]. However, the large amounts of organic solvents required, the difficulty of obtaining narrow and uniform particle size distributions, and the difficulty of scaling up are the main drawbacks that limit the use of this process at the industrial level [13]. Spray drying and electrospray represent two recent and appealing strategies for the microencapsulation of active compounds in the pharmaceutical, food, and cosmetic fields.
The spray drying mechanism is based on the atomizing and subsequent drying of a feed (i.e., a liquid, an S/O or W/O dispersion, or a particle solution) by spraying it into a hot drying medium (air or an inert gas such as nitrogen). The process is generally divided into three steps: (i) atomization of the feed into small droplets via an atomizer, (ii) drying of the droplets upon contact with the drying gas and particle formation, and (iii) separation of the dry particles from the drying medium [14]. Lately, the technique has been successfully employed for the preparation of inhalable formulations suitable for pulmonary drug delivery [15,16]. However, the adhesion of particles to the inner wall of the spray-dryer is the major drawback [13]. Electrospray (ES) (Figure 2i) has also been recently introduced as a low-cost and effective alternative approach for MP production, offering greater control over particle size, higher drug encapsulation efficiency, less residue generation, and the need for smaller amounts of solvents [17]. It is an electrohydrodynamic process in which monodisperse droplets are formed by leading a liquid of sufficient electrical conductivity through a capillary channel or nozzle held at a high potential. The final MP size can be tuned by many parameters, such as the electrostatic field strength, the needle size, the solution flow rate, the concentration, etc. [18]. In the same sense, microfluidic technologies are currently a powerful tool to generate microparticles with high monodispersity, precisely tunable structures, and excellent encapsulation efficiency [19]. The technique enables the precise manipulation of micro-flows in the channels of a micronized chip to produce uniform picoliter emulsion droplets [20]. Microfluidic chips are either lab-made or commercial products made of glass, polymers, or polydimethylsiloxane (PDMS). They can be classified into co-flow capillary devices, flow-focusing capillary devices, and combinations of these two principles (Figure 2ii). Despite the limited production scale of a single microfluidic device, scaling up of this technology is feasible by simultaneously operating multiple microfluidic devices in parallel [21].

Figure 2. (i) Different structures prepared by the electrospraying method: (a) hollow sphere, (b) spherical particle with smooth surface, (c) hollow particles, (d) electrospray droplets, (f) electrically conducting substrates, and (e) enteric coated particles. (ii) Schematic demonstration of microfluidic devices for fabricating monodisperse microparticles: (a) co-flow capillary device, (b) flow-focusing capillary device, and (c) a double emulsion capillary microfluidic device that combines co-flow and flow focusing. Adapted with permission from refs. [22,23]. 2021, Elsevier.

Recently, the supercritical fluid (SCF) method has also been extensively applied to prepare nano-/microparticles. Among the various SCFs used, supercritical carbon dioxide (scCO2) has received special attention as an alternative green candidate for the replacement of conventional organic solvents and the implementation of mild, environmentally friendly process conditions. scCO2, with its relatively low critical temperature (31.1 °C) and critical pressure (73.8 bar), presents the unique ability to dissolve certain polymers [24]. During SCF methods, the particles are formed by the rapid decompression occurring when a polymer solution (previously dissolved in scCO2) is depressurized through a nozzle into a lower-pressure environment.
Again, the pressure, nozzle diameter, solution concentration, and temperature are critical process parameters that can affect the final particle properties. There is currently a broad variety of techniques involving supercritical fluids (Table 1), some of them being rapid expansion of supercritical solutions (RESS), the gas antisolvent process (GAS), the supercritical antisolvent process (SAS), and solution-enhanced dispersion by supercritical fluids (SEDS) [25]. The selection of the most appropriate processing technique depends greatly on the interaction of the supercritical CO2 with the active ingredient, the polymeric coating material of interest, and a suitable solvent [26].

Table 1. Overview of advantages and limitations of the scCO2-based techniques [27].

Overview of Release Mechanisms

The term "release mechanism" can be described as the way in which drug molecules are released or transported and as a description of the process or event that controls the release rate [33]. Researchers, in most cases, use the term release mechanism when referring to the process that determines the release rate of the drug. Consequently, describing the process that controls the release rate is more enlightening than describing the way the drug is released when it comes to how drug release can be modified [34]. Factors such as polymer composition (glycolic/lactic acid ratio), pH, hydrophilicity of the backbone, water absorption, glass transition temperature (Tg), and average molecular weight can modulate the degradation rate and erosion of the particles [35]. From the above, it is clear that data on the physicochemical processes that influence the release rate and knowledge of the release mechanisms are of great significance prior to developing controlled-release DDSs [36,37]. There are only three possible ways of drug release from a PLA-based DDS: (i) transport through the polymer, (ii) transport through water-filled pores, and (iii) dissolution of the encapsulating polymer (which does not require drug transport). Most commonly, the encapsulated drug is a large protein or peptide molecule, too large and too hydrophilic to be transported through the polymer phase. For this reason, transport through water-filled pores is the most common way of release observed in PLA and PLGA microparticle systems. Transport through the polymer phase may happen when the drug is hydrophobic and small [34]. In most cases, more than one mechanism takes place in microparticle drug delivery systems. The three basic ways of drug release lead to three different mechanisms of drug release from polymer microparticle systems: (i) release through diffusion from the surface of particles, (ii) release through erosion of particles due to polymer degradation, and (iii) release through the swollen polymer matrix, as illustrated in Figure 3 [35,38].

Diffusion

Diffusion takes place when drug molecules are dissolved in the fluids of the human body around or within the particles and migrate away from the particles [39]. The release rate is frequently considered to be diffusion controlled initially and degradation/erosion controlled during the last stage of the release period [40]. Diffusional mass transport is involved in almost every drug delivery system. In various cases, drug diffusion is the dominant step, while in others it "only" plays a major role, e.g., in combination with polymer swelling or polymer degradation/matrix erosion. In specific cases, it even plays only a negligible role.
Fick's laws of diffusion can be used to quantify the diffused mass during the release (Equation (1)). Fick's first law of diffusion:

F = −D (∂c/∂x)    (1)

where F is the rate of transfer per unit area of section (flux), c is the concentration of the diffusing species, and D denotes the diffusion coefficient (also called diffusivity). Fick's second law of diffusion can be derived from the first one and mass balance considerations (Equation (2)):

∂c/∂t = D (∂²c/∂x² + ∂²c/∂y² + ∂²c/∂z²)    (2)

where c denotes the concentration of the diffusing species; t is time; D stands for the diffusion coefficient; and x, y, and z are the three spatial (Cartesian) coordinates [41,42].
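To make the diffusion-controlled picture concrete, the following minimal numerical sketch solves Fick's second law in one dimension for a thin drug-loaded polymer film released into a perfect sink. It is not taken from the cited works, and all parameter values are illustrative.

```python
# Minimal numerical sketch (not from the cited works): explicit finite-difference
# solution of Fick's second law in one dimension, dc/dt = D * d2c/dx2, for a thin
# polymer film initially loaded uniformly with drug and released into a perfect sink.
# All parameter values are illustrative.

def simulate_release(D=1e-12, L=100e-6, nx=51, t_end=24 * 3600, dt=None):
    """Return the cumulative fraction of drug released from a slab of thickness L (m)."""
    dx = L / (nx - 1)
    if dt is None:
        dt = 0.4 * dx * dx / D            # satisfy the explicit stability limit (<= 0.5)
    c = [1.0] * nx                        # dimensionless initial concentration
    c[0] = c[-1] = 0.0                    # perfect-sink boundary conditions
    initial_mass = sum(c) * dx
    t = 0.0
    while t < t_end:
        lap = [0.0] * nx
        for i in range(1, nx - 1):
            lap[i] = (c[i - 1] - 2 * c[i] + c[i + 1]) / (dx * dx)
        for i in range(1, nx - 1):
            c[i] += dt * D * lap[i]
        t += dt
    released = 1.0 - sum(c) * dx / initial_mass
    return released

if __name__ == "__main__":
    for hours in (1, 6, 24):
        frac = simulate_release(t_end=hours * 3600)
        print(f"after {hours:2d} h: {100 * frac:.1f}% released")
```

With the assumed diffusivity and film thickness, most of the load leaves the slab within a day, which is the qualitative behavior expected for purely diffusion-controlled release before erosion becomes relevant.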
Porous microparticles with a PLLA/PDLA ratio of 7/3 prepared by Yu et al. manifested rapid initial release of rifampicin compared to microspheres without pores, since the cracks on the surface can considerably boost the diffusive escape of the drug and promote the hydrolysis of amorphous PLA. In addition, it was observed that a closed surface significantly delays the diffusion of rifampicin from the particles, while the relatively low specific surface area and high molecular weight of PLLA keep the hydrolysis speed low, thus delaying the release of rifampicin as well. In conclusion, the drug release behavior of PLA microspheres can be adjusted by simply modifying the ratio of PLLA to PDLA. Therefore, the combination of various types of PLLA/PDLA microspheres can regulate the drug release rate based on the actual demands [43].

Erosion

Erosion, i.e., mass loss of the polymer, is initiated when the dissolved polymer degradation products are able to diffuse into the release area [38]. PLGA usually undergoes bulk erosion, in contrast to surface erosion, as PLGA is relatively swiftly hydrated. Dissolution of polymer degradation products and erosion lead to pore formation. Gradually, the size of the formed pores begins to increase, as water causes hydrolysis, and the produced acids catalyze degradation and cause polymer dissolution inside the pores, leading to subsequent erosion. Small pores consequently grow and eventually join neighboring pores to form fewer, larger pores [38,44]. Degradation occurs when the polymer chains hydrolyze into lower molecular weight chains, releasing drug molecules that were confined by the polymer chains [39]. Hydrolysis, i.e., the scission of ester bonds and the successive decrease in Mw, starts immediately when water or an aqueous medium penetrates the drug delivery device [45,46]. In an in vivo environment, the polyester backbone structures of PLA and PLGA undergo hydrolysis and generate biocompatible components (glycolic acid and lactic acid) that are eliminated from the human body through the citric acid cycle. Normal physiological functions are not affected by the degradation products [35]. This autocatalytic phenomenon is known to induce heterogeneous degradation inside PLGA matrices, i.e., quicker degradation at the center of the PLGA matrix than at the shell, since acid oligomers are entrapped in the center of the microparticles [47,48]. For a biodegradable polymer matrix, release is usually controlled by the hydrolytic cleavage of polymer chains that results in matrix erosion, even though diffusion might still be dominant when the erosion is slow [49].

Swelling

In some cases, apart from diffusion and matrix erosion of microspheres, the release mechanism is more complicated, and in a recent study, the swelling behavior of PLGA microparticles was further investigated [50]. Swelling-controlled release systems are initially dry. PLGA systems with flexible polymer chains, when located inside the body, tend to absorb a significant quantity of water and swell, increasing internal pressure and porosity and allowing drug molecules to diffuse out of the swollen network. As the volume of water inside increases over time, any noteworthy rise in pressure will probably be compensated for by swelling and rearrangement of the polymer chains. The release of active drug molecules can also be varied over a specific period of time based on external and internal factors [34,39]. In the case of initially porous system surfaces, release of the drug molecules through these pores might be rapid at the beginning, leading to a "burst effect". However, polymer swelling can close these pores and subsequently slow down the process of drug release. Furthermore, swelling has also been acknowledged as the main reason for the onset of the final rapid drug release phase in various PLGA-based microparticle systems. Usually, the latter exhibit tri-phasic drug release patterns: an initial rapid release phase ("burst release"), followed by a phase with an approximately constant drug release rate (sometimes even close to zero), and a final, again rapid, drug release phase leading to complete drug exhaustion. Scrutinizing the swelling and drug release behavior of single microparticles indicated that the onset of substantial microparticle swelling (after a certain lag time) coincided with the onset of the final rapid drug release phase [42,51]. Microparticles were incubated at 37 °C in phosphate buffered saline (PBS) solution (pH 7.4), and their swelling was observed for up to 50 days. It appeared that the microparticles started to decrease in size after 5 days of incubation and regained their original size after 10 days of incubation due to water absorption. The swelling index is lower in microparticles with diameters < 15 µm (49-51%) and much higher for microparticles with diameters > 15 µm (82-83%). In all studied microparticles, the diameter started to increase between days 15 and 30, reaching its highest value by day 30, after which a reduction was observed (Figure 4i). However, since PLGA oligomers with molecular weights around 1100 are water soluble [48], they can slowly escape out of the particles through a diffusion-controlled mechanism, and thus a dissipation of the acidic core was eventually reported after 15 days (Figure 4(iib)). Microchannels may form, and thus water can enter the microparticles (Figure 4(iiib)), with swelling starting after the 15th day and lasting up to day 30, when surface erosion progressively takes place.

PLA Microparticles

PLA offers several advantages, such as biodegradability and biocompatibility, which make it an ideal vehicle for parenteral controlled drug delivery systems. Furthermore, PLA microparticles can control drug release rates over periods lasting from a few days to several weeks up to a year, depending on the molecular weight of PLA, MP size, drug loading, solubility, and diffusion ability [52]. PLA microparticles can be easily prepared, mainly by emulsion solvent evaporation techniques; after solvent removal, the microspheres harden, encapsulating hydrophilic or hydrophobic drugs. Drug release is mainly controlled by diffusion mechanisms from the insoluble matrix [53].
Several drugs have been encapsulated in PLA MPs in recent years, aiming at storage stability, enhanced bioavailability, and sustained or prolonged release profiles [54]. Mildronate is a cardioprotective drug with a highly hygroscopic nature and excellent bioavailability. Its hygroscopic character means that the pharmaceutical product requires specific packaging and storage conditions. To overcome these inconveniences, Loca et al. studied the microencapsulation of mildronate in PLA matrices through a double emulsion approach [55]. The PLA drug-loaded microcapsules comprised a fairly homogeneous mixture of both polymer and drug. PLA was found to act as a watertight coating membrane that, in the long term, decreased the hygroscopicity of mildronate by more than two times, whereas the physical state of the drug and its in vivo release behavior were not substantially affected. In another case, the conjugation of drugs onto the distal -OH end groups of the PLA backbone was reported to produce pharmacologically active polymeric systems that impart enhanced solubility and stability to the conjugates and provide an opportunity for combination drug delivery [56]. PLA injectable microparticle formulations were found to prolong drug release compared to their PLGA counterparts in a study where the drug bupivacaine was encapsulated for use as a local anesthetic [57]. In addition, it was found that drug release is directly dependent on particle size and drug feed ratio: microparticles with higher drug loading and larger sizes exhibit much longer release periods. In general, the molecular weight of PLA is a crucial factor during the encapsulation of active ingredients. In that sense, Chaiyasat et al. worked on the encapsulation of vitamin E in poly(L-lactic acid) (PLLA) microspheres prepared by the oil-in-water emulsion/solvent evaporation technique, varying the PLLA/vitamin E weight ratio and also using different molecular weights of PLLA [58]. It was found that the optimum ratio for the formation of microparticles was 25:1 PLLA/vitamin E, whereas low molecular weight PLLA exhibited a poorer carrier capacity compared to higher molecular weight PLLA, which could better envelop the vitamin in its interior. Addressing the issue of drug solubility in a recent study, paliperidone, a highly hydrophobic antipsychotic drug used in patients suffering from bipolar disorder, was first loaded in a high-surface-area mesoporous silica foam (MCF) with cellular pore morphology [59]. The aim was to enhance paliperidone solubility and simultaneously to prepare long-acting injectable microspheres. It was found that paliperidone, after its adsorption into MCF, was transformed into its amorphous state, thus leading to an enhanced in vitro dissolution profile. Furthermore, incorporation of the drug-loaded MCF into polymeric microparticles (PLA and PLGA) prolonged the release time of paliperidone from 10 to 15 days. In a similar study by Nanaki et al., a hybrid system for the intranasal delivery of paliperidone was also investigated [60]. Paliperidone was first incorporated into MCF by adsorption; the MCF-drug system was then encapsulated into PLA microspheres, which were eventually coated with thiolated chitosan. SEM images of Thiolated_PLA_MCF_Pal microparticles indicated that their sizes varied between 3-6 µm.
TEM images also showed that paliperidone was detected inside the microspheres (Figure 5a), while a film of thiolated chitosan was formed on the surface of the microparticles (Figure 5b). Drug release studies showed that paliperidone is a hydrophobic drug with low solubility, since only 10% of it was dissolved in the first hour, without any further dissolution up to 20 days (Figure 5c). When it is incorporated into the MCF nanopores, the dissolution rate is substantially enhanced due to drug amorphization. The addition of the drug-loaded MCF to PLA and PLGA microparticles produces controlled-release formulations lasting up to 22-24 days (Figure 5d). However, the release rate is lower in PLA microspheres than in PLGA ones due to the lower glass transition temperature of PLGA compared to PLA. Apart from double emulsion techniques, other approaches have also been reported for the preparation of drug-loaded PLA MPs. In such an attempt, sustained-release PLA microparticles of the drug agomelatine were prepared using a solvent evaporation method combined with wet milling technology [61]. Dissolution experiments showed that agomelatine could be released in a sustained manner over a period of one month. The initial release mechanism was diffusion, followed by erosion of the polymer matrix due to hydrolysis of PLA. The rapid expansion of supercritical solutions (RESS) process has also been successfully employed by Vegara-Mendoza et al. to encapsulate coenzyme Q10 (coQ10), used for the prevention of cancer and neurodegenerative diseases, in PLA microcapsules [62]. Analysis of the effect of the PLA/coQ10 ratio and the cosolvent on microcapsule properties indicated that both the morphology and the size of the microcapsules were strongly influenced by the PLA/coQ10 ratio. Shapes similar to those of coQ10 were obtained when the same concentrations were used; however, an increase in particle diameter was observed when the concentration of both materials was raised. Although the cosolvent in the supercritical system influences the particle size and the coQ10 solubility, it does not exhibit any effect on the morphology of the microcapsules or any interactions with PLA. Tasci et al. also worked beyond conventional methods of polymeric microparticle manufacturing in order to overcome some of their drawbacks, such as time-consuming experimental procedures, among others. In their case, the electrospray method was used [18]. The formation of particles was performed using PLA solutions in three different organic solvents, namely dichloromethane, chloroform, and a chloroform-ethanol mixture. The size and morphology of the prepared microparticles were optimized via well-controlled flow rates, applied voltages, and solvent concentrations. It was shown that DCM was the most suitable solvent for obtaining microparticles of a uniform spherical shape, with an average diameter of 3.00 µm and a highly porous structure (Figure 6). With regard to shape control, the morphology of PLGA/PLA particles prepared using a microhomogenizer and a membrane emulsification technique was recently compared [63]. It was found that the particles prepared with the classical microhomogenizer were polydisperse and irregular in shape, while those prepared by the membrane emulsification technique were very spherical and monodisperse.
These differences also lead to completely different release profiles of the drug rifampicin, whose release is much faster in the first case and almost sustained, extended up to 14 days, when the membrane emulsification technique is used. Likewise, Kudryavtseva et al. emphasized that customized microcapsule shapes show clear advantages over spherical ones, having enhanced internalization by host cells, improved flow characteristics, and higher packing capacity [64]. A method for "defined-shape" polymer capsule fabrication, inspired by the traditional process of cooking pelmeni dumplings, was proposed (Figure 7). PLA microcapsules were also fabricated by two different approaches, both yielding monodisperse size and shape distributions with an about 7 µm long torpedo-like shape. FeCl2 ground crystals, Fe3O4 nanopowder, and carboxyfluorescein were used as model cargoes for the prepared microcapsules (Figure 8). The study successfully demonstrated a well-defined core-shell structure with high loading capacity, good cytocompatibility, and internalization by cells without causing toxic effects. Moreover, precise control over the microcapsule's geometry was successfully achieved, providing significant flexibility in the choice of active cargoes, independent of their solubility and molecular weight. The microcapsule's shell accordingly defines the capsule's geometry, protects the cargo, and modulates its release, which may lead to a wide variety of different drug delivery strategies, whereas co-encapsulation and surface modification can further facilitate their application according to targeted needs. The shape/structure and size parameters are also central in the work conducted by Ma's research group toward the design of more stable DDS [65]. In their study, the strategies for maintaining the bioactivity of protein drugs during preparation and drug release were analyzed and evaluated for future applications. First, the issue of controlling the size and uniformity of the microparticles was addressed. In this sense, a membrane emulsification process was developed by dissolving PLA in dichloromethane, acting as the dispersed phase, with the continuous phase being water containing poly(vinyl alcohol) (PVA) and sulfate. The diameter obtained by the direct membrane emulsification process could be effectively controlled from the submicron range to 100 µm, and the size distribution (CV value) was estimated at around 10%.
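The CV value quoted above is simply the coefficient of variation of the measured particle diameters. The short sketch below shows how such a figure is typically computed; the diameters listed are illustrative, not data from the cited study.

```python
# Minimal sketch of the size-distribution metric quoted above: the coefficient of
# variation (CV) of measured particle diameters, CV (%) = standard deviation / mean * 100.
# The diameters below are illustrative values, not data from the cited study.

from statistics import mean, stdev

def coefficient_of_variation(diameters_um):
    """CV (%) of a list of particle diameters (sample standard deviation)."""
    return 100.0 * stdev(diameters_um) / mean(diameters_um)

if __name__ == "__main__":
    diameters = [48.2, 51.7, 50.3, 49.8, 52.9, 47.5, 50.6, 51.1]   # µm, illustrative
    print(f"mean diameter: {mean(diameters):.1f} µm, "
          f"CV: {coefficient_of_variation(diameters):.1f}%")
```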
Working also with biomolecules, Icart et al. focused on the microencapsulation of glucagon-like peptide-1 (GLP1) in PLA matrices using the double emulsion-solvent evaporation technique, where particles were formed by solvent evaporation under constant stirring and were collected through centrifugation [66]. Glucagon-like peptide-1 (GLP1) is a naturally occurring peptide used in cardiovascular and weight loss applications; however, its clinical application is still limited because of its half-life of 2 min, a result of rapid enzymatic degradation. The results of the study showed that PLA had a determining effect on the in vitro release profile of the peptide. More specifically, the release occurred in three phases, starting with an initial burst release of the surface-associated drug, followed by a slow release rate until the polymeric microparticle matrix degraded, resulting in an accelerated-release phase by the end of the degradation process, which lasted for a total of 25 days. This in vitro case study was further backed up with evidence from in vivo experiments, and it strongly suggests that hGLP1-loaded PLA microparticles can provide sustained drug release for weeks and be potentially useful for clinical applications. Recent developments in stem cell-based therapy methods have utilized the dual microencapsulation of stem cells, targeting bone repair by introducing bone marrow mesenchymal stem cells (BM-MSCs) and bone morphogenetic protein-2 (BMP-2) to the affected area in order to repair large bone defects. The research of Kong et al. [67] describes the formation of a multicore microcapsule with an internal core of sodium alginate enclosing BM-MSCs and an outer shell of PLA with encapsulated BMP-2. The microcapsules were formed through electrospraying. The sustained release of the combination of stem cells and proteins for 30 days in vitro encouraged Kong et al. to progress to the in vivo treatment of bone structures in rats. The transplantation of the microcapsules into the affected tissues led to sufficient repair of the bone after 4 to 8 weeks. Thus, the research reached the conclusion that the transplantation of BM-MSCs and BMP-2 in multicore PLA microcapsules is a promising strategy for regenerative therapies of large bone defects. Lastly, long-acting injectable (LAI) microspheres are a fascinating category of formulations, functioning as depot systems for effective drug administration and providing enhanced stability and bioavailability, improved efficacy, and better patient compliance. These systems are also appropriate for the encapsulation of sensitive active agents, such as peptides and proteins. According to Butreddy et al. in their relevant review article [68], PLA/PLGA-based LAI microspheres can have a substantial positive impact on the delivery of proteins/peptides. They are able to deliver drugs to targeted areas, achieving higher drug concentrations on the spot and reduced systemic exposure. LAI microspheres must have a uniform size distribution in large-scale production and must retain consistent bioactivity of the encapsulated drug during preparation, storage, and release. Emulsion-solvent evaporation, coacervation, and spray drying are three important manufacturing techniques, with emulsion-solvent evaporation currently being the most reliable one for the preparation of clinical LAI microspheres.
PLGA Microparticles
The combination of lactic acid (LA) with glycolic acid (GA) results in a copolymer system known as PLGA, a very attractive and powerful option and one of the most established ones in the pharmaceutical field. PLGA systems are comprehensively examined as sustained-release drug delivery systems due to their biodegradability, biocompatibility, morphology, particle size, and sustained drug release properties in various in vivo and in vitro systems. Their polymeric properties, such as the glass transition temperature (Tg) and degradation rate, can also be fine-tuned by modifying the LA:GA content ratio [27]. At the moment, approximately 20 PLGA-based products are commercially available with the approval of the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) and are used as delivery vehicles for drugs, proteins, and numerous macromolecules in therapeutic applications [69,70]. PLGA microspheres loaded with the gefitinib drug were prepared using an oil-in-water solvent evaporation method, obtaining different particle sizes of 5 ± 1, 32 ± 4, 70 ± 3, and 130 ± 7 µm [71]. The encapsulation efficiency of gefitinib, the loading content, and the microsphere yields all increase with increasing particle size of the prepared microspheres. In vitro drug release studies showed that microspheres with sizes smaller than 50 µm have a rapid diffusion-based release, reaching completion within 2 days. For such small particle sizes, the high surface area per unit volume leads to a higher rate of water permeation and thus to higher matrix degradation rates (Figure 9a). Larger microspheres, however, showed a sigmoidal release pattern that continued for three months, in which diffusion (early stage) as well as particle erosion (later stage) governed drug release (Figure 9b). Therefore, it is clear that the two main mechanisms that drive drug release from PLGA microspheres are diffusion and degradation/erosion [72]. Supercritical fluid technology has also been employed for the fabrication of PLGA microparticles for drug delivery systems. Its tunable properties above the critical temperature and pressure provide control of the particle size, particle morphology, and drug loading [27]. In another work, various highly porous PLGA MPs were combined with an in silico nonlinear first-order model in order to predict and obtain transitional situations and tunable release of curcumin, while guaranteeing either extended or rapid drug release [73]. Three different configurations were obtained (CUR-NE, CUR-oil, and CUR-water) in order to produce microspheres with curcumin molecules embedded inside or outside the porous structures. This approach can be applied to other molecules and drugs, giving the option of avoiding additional experiments. In that way, drug release will occur with controlled timing and in a tunable amount, optimizing the therapeutic effectiveness and, therefore, decreasing potential side effects. In another study, Molavi et al. incorporated nucleophilic (risperidone) and basic (olanzapine) drugs into PLGA microspheres [13]. The results demonstrated significant polymer degradation and a biphasic release profile for risperidone, whereas the Mw of the placebo microspheres remained essentially unchanged. Furthermore, the rapid initial release was attributed to the reaction between the weakly basic drugs and the acidic polymer. A plateau was reached after these neutralization reactions, through hydrolysis of the acidic monomers produced. These effects were confirmed at high risperidone and olanzapine loadings.
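To make the nonlinear first-order release modeling mentioned above for the curcumin-loaded porous MPs more concrete, the sketch below fits a generic first-order expression, Q(t) = Q_inf (1 - exp(-k t)), to a hypothetical cumulative release curve. It is not the exact in silico model of the cited work; the time points and release percentages are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, q_inf, k):
    """Cumulative release (%) predicted by a first-order model."""
    return q_inf * (1.0 - np.exp(-k * t))

# Hypothetical cumulative curcumin release data: time (days) vs. release (%)
t_days = np.array([0.25, 0.5, 1, 2, 4, 7, 10, 14])
release_pct = np.array([12, 21, 35, 55, 74, 88, 93, 96])

popt, _ = curve_fit(first_order_release, t_days, release_pct, p0=[100.0, 0.3])
q_inf, k = popt
print(f"Fitted plateau: {q_inf:.1f} %, rate constant k: {k:.2f} 1/day")

# The fitted parameters can then be used to predict release at unmeasured
# time points, e.g., to interpolate between extended- and rapid-release formulations.
print(f"Predicted release at day 5: {first_order_release(5, *popt):.1f} %")
```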
Ding et al. prepared chitosan-coated PLGA microparticles with controlled diameters from 5 to 120 µm using capillary microfluidic techniques for ocular drug delivery systems [74]. Severe ocular environments highlight the necessity for mucoadhesive microparticulates with the ability to survive such harsh conditions. Furthermore, studies have shown that the residence of such particles on corneal surfaces is prolonged, and thus the bioavailability of drug systems topically applied to the eyes is significantly enhanced. PLGA microparticles loaded with ovalbumin (OVA), a protein found mainly in egg white, were fabricated onto hydrogel-forming microneedle arrays with the use of electrohydrodynamic atomization (EHDA) by Angkawinitwong et al. [75] (Figure 10). An extended release of ovalbumin over ca. 28 days was subsequently recorded, while the coated arrays exhibited mechanical characteristics and insertion properties similar to those of the uncoated system. Thus, the possibility of using EHDA to coat a microneedle array seems very promising as a novel noninvasive protein delivery strategy for transdermal applications. Concerning the administration of biologics, parenteral routes, such as intravenous and/or intramuscular injections, have mostly been used due to their low oral bioavailability and stability in the gastrointestinal tract, which can negatively affect patient compliance. As an alternative non-invasive route for the administration of pharmaceutical ingredients, pharmaceutical researchers have recently focused their studies on pulmonary delivery [76]. The pulmonary route is an attractive target for both local and systemic drug delivery due to benefits such as a rich blood supply, a large surface area, and the absence of first-pass metabolism. Inhalable PLGA MPs have been widely studied in an attempt to find strategies for the prolonged release of drugs in the lung [35] (Table 2). The small size of the inhalable particles is highly necessary for appropriate lung deposition but may result in low drug encapsulation into MPs. After the drug is administered, its dispersion in the lung and its retention at the favorable site of the lung are significant for the treatment to be effective. Chlorhexidine diacetate (CDA) and digluconate (CDG) are the salts most frequently used in dental care. PLGA microparticles containing CDA and CDG were prepared successfully by Sousa et al. [86]. PLGA microparticles containing the diacetate salt demonstrated a viable and homogeneous drug release over 120 days, while the digluconate salt solution showed a more immediate release; neither salt is subject to thermal degradation. Both MPs had an antimicrobial effect against Streptococcus mutans, and their drug release profiles demonstrated the systems' suitability to control these bacteria in the oral environment using PLGA MPs containing CDA or CDG. Although chlorhexidine is very efficient in the control of this microorganism, it presents some disadvantages, such as tongue discoloration, taste alterations, and staining of teeth restorations. PLGA microparticles containing CDA proved to be feasible and could be incorporated into temporary restorative dental materials. Jamaledin et al. proposed the use of MPs made of PLGA to encapsulate fd bacteriophage for the first time. For bacteriophage-PLGA MP synthesis, the water-in-oil-in-water (w1/o/w2) emulsion technique was used (Figure 11).
The immunogenicity of the encapsulated bacteriophage after being released by the MPs was demonstrated using recombinant filamentous bacteriophages expressing the ovalbumin (OVA) antigenic determinant. Their results revealed that the encapsulated bacteriophages remained stable and maintained their immunogenic properties [2]. Artemether, a highly efficient antimalarial drug, possesses the potential to treat patients afflicted by Plasmodium falciparum (P. falciparum). However, poor therapeutic effects can result from the fact that artemether may be degraded rapidly by stomach acids and cleared quickly from the body after oral administration. Aiming to solve this problem, artemether is combined with piperine (AP), a naturally occurring alkaloid and excellent bio-enhancer, which can enhance the bioavailability of numerous drugs and other phytochemicals. Ali et al. fabricated two types of core-shell MPs based on PLGA and chitosan (CS), loaded with both artemether and piperine, by using a coaxial electrospray system (CES) to achieve a sustained drug release [87]. The PLGA or PLGA-CS shell provided improved sustained drug release behavior in both types of MPs. The shell protected artemether from the fast degradation caused by acidic gastric juice. The results indicated that AP-PLGA-CS and AP-PLGA microparticles hold potential for application in malaria treatment and provide a promising platform to encapsulate multiple drugs in polymeric particles for drug delivery. Multiple drugs have also been microencapsulated in PLGA scaffolds by Aina et al. [88]. Specifically, metronidazole, paracetamol, and sulphapyridine were encapsulated into PLGA scaffolds and then studied using the X-ray powder diffraction (XRPD) technique. Changes in the diffraction patterns of those scaffolds after encapsulation suggested a chemical interaction between the pure drugs and the scaffolds. The drugs can be encapsulated in the scaffolds even at low concentrations, as permitted by their aqueous solubilities. No physical intermixture was observed. Very recently, Kulkarni et al. aimed to create a drug delivery and release system that may be helpful as a novel therapeutic and anticancer system to slow down the development of diseases associated with oxidative stress [89]. Corn silk contains flavonoids that contribute to its antioxidant and anticancer activity. An efficient delivery system was fabricated using the solvent extraction method to incorporate anticancer methanolic corn silk extract. Spherical and relatively small (d = 485.9) polymeric microparticles containing flavonoids were obtained, with an encapsulation efficiency (EE) of 60.66%. An MTT cell viability assay was performed on HeLa and NIH 3T3 cell lines, and the cellular uptake of the drug was studied using fluorescence microscopy, confirming uptake by the cells within 24 h of treatment. The MPs were proven non-toxic to normal cells, and the system provided protection and controlled release of the bioactive compounds. Zhou et al. investigated the controlling factors for the cytotoxicity, photothermal, and anti-tumor effects of biodegradable magnesium-poly(lactic-co-glycolic acid) (Mg-PLGA) microspheres (Figure 12), in vitro and in vivo. The Mg-PLGA microspheres were made by microfluidic emulsification and demonstrated a high Mg encapsulation efficiency of 87%. The photothermal and antitumor effects of the Mg-PLGA spheres were determined by their Mg content, irrespective of their structural features and size, as shown in in vitro cell assays and in vivo mouse models. These results provided important implications for designing and fabricating stimuli-responsive drug delivery vehicles [90].
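For context, encapsulation efficiency (EE) and drug loading (DL) figures such as the 60.66% EE and 87% quoted above are conventionally calculated as in the short sketch below. Only the formulas are standard; the masses used here are hypothetical and do not come from the cited studies.

```python
def encapsulation_metrics(drug_encapsulated_mg, drug_fed_mg, particle_mass_mg):
    """Return (EE %, DL %) using the conventional definitions."""
    ee = 100.0 * drug_encapsulated_mg / drug_fed_mg        # encapsulation efficiency
    dl = 100.0 * drug_encapsulated_mg / particle_mass_mg   # drug loading
    return ee, dl

# Hypothetical batch: 10 mg drug fed, 6.1 mg recovered inside 95 mg of microparticles
ee, dl = encapsulation_metrics(drug_encapsulated_mg=6.1, drug_fed_mg=10.0, particle_mass_mg=95.0)
print(f"EE = {ee:.1f} %, DL = {dl:.1f} %")
```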
Subha et al. used beeswax and PLGA as wall materials in order to obtain sustained and controlled release of capecitabine, an anticancer drug [91]. First, the beeswax microspheres loaded with the drug were formed using the melt dispersion technique and were then dispersed within the drug-incorporated PLGA. Consequently, the beeswax microspheres form the inner core of the drug delivery system. The beeswax vehicles were designed via the coacervation and phase separation method and had a diameter of 3441 nm. The drug release from the microcapsules was tested in simulated gastric fluid, and a slow and controlled release of the drug was observed (pH 6.8 at 37 °C). Approximately 25% of the drug was released in 24 h, leading to the conclusion that beeswax microspheres could be considered potential vehicles for drug delivery and drug targeting in cancer therapy. In a recent study, SBA-15 mesoporous silica was loaded with the paclitaxel (PTX) anticancer drug and was then encapsulated in two different copolymers of PLGA (50/50 and 75/25 w/w), forming composite microspheres appropriate for topical injection [92]. The target was to increase the drug loading capability and to enhance its solubility. The TEM micrographs shown in Figure 13(ia) verify the well-formed structure of the SBA-15 mesoporous silica used in this study, which was loaded with PTX and encapsulated in PLGA microparticles with spherical morphology and sizes of 8-12 µm (Figure 13(ib)). It was found that the molar ratio of LA/GA in the copolymers (i.e., PLGA 50/50 and 75/25 w/w) has no significant effect on the sizes of the prepared microspheres. A more careful observation of Figure 13(ic) shows that PTX/SBA-15 was successfully embedded into the microparticles and can be observed as black shadows in the respective images. From the dissolution profiles, it is clear that neat PTX has a very low solubility, since even after 12 days in SBF medium its dissolution is less than 20% (Figure 13(iia)), while when adsorbed onto SBA-15 its solubility was significantly improved, since the whole amount of PTX was dissolved by day 12. This behavior is associated with the amorphization of PTX, as proven by DSC and XRD studies. An enhancement was also observed when PTX was encapsulated in PLGA microspheres (Figure 13(iib)). The PLGA copolymer composition slightly affected the release rate, which is higher in PLGA 50/50 than in PLGA 75/25 w/w, due to the lower Tg of the first copolymer, as well as due to the high drug loading. A multiphasic release was observed from both copolymers, with an initial burst release during the first 1-2 days, followed by a slower sustained release for up to 3 weeks. This behavior changed when the SBA-15 loaded with PTX was microencapsulated into the PLGA matrix (Figure 13(iic,d)). The burst release noticed in the first 2 days is lower due to the slower diffusion rate of PTX from SBA-15. Additionally, the release rate from both composites is more sustained compared with the neat copolymers.
Microparticles Based on Amphiphilic PLA/PEG and PLGA/PEG Copolymers
Amphiphilic biodegradable and bioresorbable microparticles consisting of hydrophilic-hydrophobic parts have gained significant attention across multiple fields of biochemistry for direct drug delivery because of their useful rheological, biomedical, and mechanical properties.
Amphiphilic copolymers tend to self-assemble in aqueous solution into core-shell micelles with a hydrophobic center and a hydrophilic external surface. PLA's high hydrophobicity makes it an excellent biomaterial for such applications, while poly(ethylene glycol) (PEG) is a biocompatible hydrophilic polymer used extensively in pharmaceutical technology. Such poly(L-lactide)/poly(ethylene glycol) (PLLA/PEG) copolymers can be easily prepared by ring-opening polymerization of lactide in the presence of PEG and a catalyst, usually stannous octoate [93]. Hydroxyl end groups can act as initiators for lactide polymerization, and according to this procedure diblock or triblock copolymers can be prepared. Ding et al. synthesized such amphiphilic PLLA/PEG copolymers (Figure 14) with various molecular weights for the delivery of ibuprofen (IBU), which is used for the treatment of a range of aches and pains [94]. Studies showed that, by increasing the Mw of PEG, the encapsulation efficiency and drug loading content of IBU in the PLLA-PEG micelles were improved significantly. Another interesting application of PLA-PEG diblock copolymers is the treatment of retinal diseases. Rafat et al. [95] encapsulated the Tat-EGFP protein (Tat: the protein transduction domain of the HIV-1 trans-activator of transcription protein), which can effectively circumvent the retina's barriers. Four different PEG-PLA microparticle formulations were developed by changing the ratio of the protein to the polymer. The microcapsules exhibited a burst release profile in the first days. As the polymer concentration was increased at a constant protein concentration, the release rate was reduced. Similar amphiphilic copolymers can be prepared by using PLGA instead of PLA. Aspirin has been, since its earliest marketing approval, an effective drug against common inflammation, fever, and pain. Liu et al. [96] loaded aspirin into PLGA-PEG-PLGA copolymers while additionally checking the impact of including organic montmorillonite (o-MMT) in the microspheres. MPs containing higher concentrations of montmorillonite show a faster rate of drug release, despite the irregular effects of the first few hours. Compared to the plain copolymer microspheres, microspheres with o-MMT gave a better release profile and are recommended for further research. The ABA tri-block copolymer (PLGA-PEG-PLGA) was used by Khodaverdi et al. [91] for insulin microencapsulation. Microspheres were created by the microwave heating technique without the use of organic solvents throughout the preparation. They observed regulated drug release for up to 3 weeks, at which point the polymeric matrix may have begun to degrade. However, in the case of encapsulation of very small molecules, such as clonidine, the choice of PLGA-PEG copolymers, according to Gaignaux et al. [97], will not give satisfactory encapsulation results, as, due to their high porosity, molecules can easily penetrate the hydrophilic network. Another interesting application of these copolymers is cancer treatment. PEG can be used to modify microparticle surfaces in order to avoid their recognition by cells of the mononuclear phagocyte system, thus leading to an increased blood circulation time of the microparticles [98]. These PLA/PEG amphiphilic carriers exhibit prolonged blood residence after their administration (long-circulating drug carriers), as well as passive targeting of tumors due to the leaky vasculature of many tumors and the 'enhanced permeability and retention' (EPR) effect [99,100].
Nevertheless, nanoparticles prepared from copolymers with high molecular weight PLA blocks exhibited longer blood circulation times after i.v. administration in rats than nanoparticles prepared from copolymers with low molecular weight PLA blocks [101], and such copolymers have been used extensively as drug delivery microparticles (Table 3). Additionally, in vivo studies performed in mice showed that blood circulation time could be increased by increasing the molecular weight of the PEG blocks in diblock PEG-PLGA copolymers from 5000 to 20,000 g/mol [102]. These copolymers have long been established as ideal carriers for anticancer drugs [103][104][105]. Despite the generally positive outcomes, therapeutic agents have several drawbacks, including poor pharmacokinetics, nonspecific biodistribution, and limited targeting efficiency. When therapeutic agents are used in cancer treatment, low solubility and hydrophobicity are considered significant obstacles [106]. To overcome them, anticancer drugs, such as paclitaxel and doxorubicin, are encapsulated in amphiphilic copolymer micelles, such as PEG/PLLA. Huang et al. [107] used folic acid (FA) conjugated to the PEG terminal groups to form FA-PEG-PLLA for tumor-targeted therapy (Figure 15). Owing to the overexpression of FA receptors in many cancer cell types, folic acid is one of the most commonly used targeting ligands [108]. Microparticles loaded with paclitaxel (PTX) were prepared using the solution-enhanced dispersion by supercritical fluids (SEDS) technique, with CO2 used as the supercritical fluid. The diameters of most of the particles were 1-3 µm, with a spherical shape and slight agglomeration. Results resembling Huang's previous work [107] were obtained when a PEG content of 25% (w/w) was used. Lumpy grains were grouped together, making them unsuitable for use as drug carriers. A PEG content increase from 10 to 15% w/w causes crystallization of paclitaxel outside the microspheres. A further increase of the PEG content in the copolymer matrix decreases the drug loading, as well as the encapsulation efficiency of the developed microparticles. The SEDS process was also used to create microspheres from a morphine-loaded PLLA-PEG-PLLA triblock copolymer. Morphine is a naturally occurring analgesic drug used for the relief of cancer pain. The work of Chen et al. [110] showed that the rate of encapsulation and release of the drug is a function of the percentage of poly(ethylene glycol) groups. The results indicate that the triblock copolymer with a 3% PEG ratio shows a better profile, as it brings about 80% release of morphine in 48 h. PTX was also loaded into magnetic microspheres prepared from a PLA-PEG copolymer with magnetite [111]. The microspheres were created by the solvent evaporation method, with an average diameter of 21.73 ± 12.40 µm. The drug was released at a high rate in the first 5 days, with or without the application of a magnetic field, and the release was sustained over the following period under an applied magnetic field. The goal of Ruan et al. [112] was to show that the PEG-PLA copolymer was effective compared with hydrophobic PLGA. A controlled release through carriers for no more than one month is required for an anticancer drug. It was concluded from the study that the release is more satisfactory in the case of the PLA-PEG-PLA copolymer, and that it is also affected by the presence of acetone. 5-Fluorouracil (5-FU) in combination with microspheres made from mPEG-PLA was tested as an anticancer therapy by Xiong et al. [113].
Increasing the amount of diblock copolymer led to an increase in the rate of encapsulation of the drug in the microspheres, since the higher viscosity prevented migration of the drug. Increasing the amount of mPEG in the copolymer, on the other hand, had no such impact. The release of the drug was quite rapid in the first hours, as expected. In addition to drugs, other naturally occurring substances, such as perillyl alcohol (POH), a phenolic monoterpene, have been shown to block tumor cell cycles by inhibiting their migration. Its potential use in the treatment of glioma is indeed promising [114]. Marson et al. [115] worked with poly(D,L-lactic acid)-block-poly(ethylene glycol) (PLA-b-PEG) polymer-based carriers as a delivery platform for POH. Microcapsules were prepared by the solvent evaporation method with two different tensoactives: PVA and sodium cholate (SC). Both samples presented similar release profiles, displaying complete drug unloading after 3 h [112]. Another significant goal of the medical community is to develop myocardial regeneration systems for the treatment of myocardial infarction. The contribution of PLGA copolymers to this research has been demonstrated by Pascual-Gil and co-workers [117]. They prepared MPs from PLGA in combination with PEG, mainly to avoid opsonization in blood, and the growth factor-loaded particles, containing neuregulin (NRG), were administered to rats. PEGylation led to an increase in the retention of microparticles in the myocardium for up to 12 weeks. A similar tactic was followed by Kirby et al. [118] for bone regeneration. In this case, bone morphogenetic protein 2 (BMP-2) was used as the growth factor. The triblock PLGA-PEG-PLGA acts as a plasticizer, while the hydrophilic environment that is created increases the hydrolysis of PLGA and, at the same time, the release of the drug until day 10. Lysozyme was used by Tran et al. [119] as the model protein, since it exhibits behavior similar to growth factors for tissue engineering. Due to the high cost of growth factors, a polymer system is required to have a good release profile while loaded with only a small amount of them. This work is also an indication of the large contribution of the PEG groups, as a release of up to 58% was observed by the 8th day in the triblock with a PLGA20PEG20 ratio. Furthermore, Li et al. [116] suggested the triblock copolymer poly(lactic acid)-poly(ethylene glycol)-poly(lactic acid) (PLA-PEG-PLA), PELA, for bone regeneration. Bone morphogenetic protein (BMP) was encapsulated in microspheres of the copolymer. The maximum EE (%) was obtained when the Mw of PEG was 4000 Da (27.6%), due to the increase in hydrophilicity. EE also reached its highest value (26%) when the amount of PELA was 330 mg. All these applications are summarized in Table 4.
PLCL Microparticles
Poly(ε-caprolactone) is a semicrystalline polymer with rubbery properties that exhibits good biodegradability, biocompatibility, and permeability to drugs and has thus attracted great attention for medical applications [120]. Copolymers of L-lactide and ε-caprolactone (PLCL) offer great permeability, a more favorable degradation rate, and thermal and mechanical features that can enhance their processing, provide long-term delivery, and, as slowly degradable materials, present great interest for immunosuppressive drugs.
Materials obtained from poly(ε-caprolactone) undergo slower degradation than PLGA and poly(D,L-lactide) (PDLLA) and may be used in systems that provide drug delivery even over a period of more than one year [120,121]. For instance, copolymers of ε-CL and L-LA were reported to degrade in vitro at a rate dependent on the L-LA content and the PCL crystallinity [122]. The PLCL copolymer is a promising material for medical applications because of its controllable elasticity and the capacity to tune the ε-caprolactone/L-lactide molar ratio and, hence, its mechanical properties [120]. Degradable microspheres have gained attention as delivery vehicles for steroids in postmenopausal therapy. Copolymers of ε-CL and D,L-LA have been used to prepare microspheres for the prolonged release of progesterone and b-estradiol. The system offered a constant release for up to 40 days in vitro and 70 days in vivo [122]. Hitzman et al. developed a respirable microcarrier based on poly(lactide-co-caprolactone) (PLCL), using a spray drying technique, for the sustained release of 5-fluorouracil (5-FU), which has been extensively studied as a chemotherapeutic agent [123]. Microdialysis was used to determine the release rate of 5-fluorouracil from liposomes, microspheres, and lipid-coated nanoparticles (LNPs) and to study their use as a respirable delivery system for adjuvant (post-surgery) therapy of lung cancer. Microspheres may deliver high concentrations of drug for an extended time by controlling their size and porosity. Different systems based on PLCL and PLGA microspheres were compared, and the results showed that PLCL microspheres released 5-FU faster than the PLGA systems. Copolymers of ε-CL and L-LA can also be interesting for developing alternative release systems for cyclosporine A (CyA) and rapamycin (sirolimus), for which the available dosage forms cause many side effects [121]. Li et al. carried out in vitro and in vivo studies on microspheres loaded with cyclosporin A, based on copolymers of lactide and ε-caprolactone [123]. Cyclosporin A (CyA), a hydrophobic peptide, was incorporated in microspheres based on poly(lactide-b-caprolactone) (P(LA-b-CL), LA/CL (in molar ratio): 78.7/21.3 and 48.1/51.9) and poly(lactide-co-glycolide) (PLGA, LA/GA: 80/20) using the oil-in-water (O/W) emulsion solvent evaporation method. It appeared that CyA can be efficiently incorporated in the microspheres (exceeding 96%). Compared with PLGA microspheres, P(LA-b-CL) microspheres released CyA faster (Figure 16a), which can be attributed to the partial crystallization occurring in the P(LA-b-CL) microspheres. CyA levels in whole blood were also examined. Compared with PLGA microspheres, poly(lactide-b-caprolactone) microspheres provided a higher blood level of CyA (Figure 16b). In all cases, CyA release can be divided into two different phases: burst release within the first few days and subsequent sustained release. A one-week subdermal delivery system for L-methadone was developed by Cha et al. [125]. Microspheres containing 13-16% L-methadone were prepared from three biodegradable polymers, PLLA, PGLA, and poly(ε-caprolactone-co-L-lactic acid) (PCL-L,LA), using the solvent evaporation method. L-methadone's release from PCL-LA microspheres (75-85 mol% L-lactic acid) was completed within 48 h. PGLA (80 mol% L-lactic acid) microspheres showed an almost equally fast release, but 20% of the drug remained in the polymer matrix.
Release from PLLA microspheres was subject to a 3-4 day induction period prior to loss of the drug during the next five days. This induction period for PLLA and the direct release of L-methadone from PGLA microspheres were the consequences of an exceptionally large acceleration of the hydrolytic chain cleavage of the polymers in the presence of the basic drug. Microspheres made from copolymers of lactide and ε-caprolactone find an interesting application in the controlled release of steroids, such as progesterone and b-estradiol [126]. These copolymers contained 83-94% of L- or D,L-lactide. The influence of the microstructure of the lactidyl blocks in the copolymer chains on the drug release rate has been investigated. A more uniform release rate appeared in copolymers derived from D,L-lactide as compared with those derived from L-lactide. For the copolymer containing 83-94% of D,L-lactide units, the progesterone and b-estradiol release rate in vitro was found to be practically constant over 40 days. The in vivo studies performed on rats revealed that the period of constant release rate of b-estradiol could be prolonged to about 70 days. Microspheres of poly(D,L-lactide-co-caprolactone) (86 mol% D,L-lactide) have also been prepared by Kassab et al. in order to incorporate nystatin, an antifungal drug [127]. The microspheres were prepared using the oil-in-water (o/w) emulsion solvent evaporation technique. The percentage yield was high, and the drug entrapment efficiency depended on the quantity of nystatin in the formulation, while the microspheres were spherical in shape with an average size between 80 and 110 µm. The release profile was slow during the first week, then rapid during the second week, reaching a maximum close to 90% for the formulation that contained the highest quantity of nystatin. Moreover, the microspheres were stable, and no degradation was observed during the two-month study period. Zhu et al. encapsulated ibuprofen, a non-steroidal drug commonly used in the treatment of post-operative, epidural, arthritis, dysmenorrhea, and dental pain, in PLCL microspheres (LA/CL: 78.7/21.3 by mole) [127]. For the preparation of the microspheres, the oil-in-water (o/w) solvent evaporation method was used. The results indicated that the drug entrapment efficiency was about 80%. The complete ibuprofen release from the microspheres lasted more than 1 month. The results showed that ibuprofen was partially crystalline in the PLCL microspheres, suggesting that the copolymer could have potential applications for long-term ibuprofen release. The use of PLA-PCL copolymers for drug microencapsulation is summarized in Table 5.
Formulations Based on Star-Shaped PLA or PLGA
Several researchers have concluded that star-shaped PLA (s-PLA) presents better rheological and mechanical properties than linear PLA [129]. s-PLA has a compact structure with a smaller hydrodynamic radius, lower solution viscosity, increased strength, and peculiar morphologies compared to linear PLA of similar molecular weight [130][131][132]. These features make s-PLA less likely to become stranded in the bloodstream and help ensure the bioavailability of the encapsulated drug [133]. Moreover, s-PLA has attracted considerable attention due to its special three-dimensional structure, containing more end-groups that may be treated as functional groups. s-PLA can connect more effectively with targeted molecules, which contributes to the transport of encapsulated drugs to the targeted organs and tissues [133,134].
The features of synthesized s-PLA can modify the pharmacokinetics and biodistribution of drugs, thus improving the efficacy and safety of the therapy [135]. s-PLA can be synthesized by ROP of lactide with multifunctional initiators. The initiator is a critical factor affecting the properties of s-PLA. A wide range of initiating alcohols has been used: glycerol and pentaerythritol to synthesize three- and four-arm s-PLA, respectively; di(trimethylolpropane), glucose, and xylitol to synthesize five-arm s-PLA [132,133]; and iron(III) tris(dibenzoylmethane) for six-arm s-PLA synthesis [135]. Star-shaped polyesters were synthesized by reacting L-lactide with glycerol, as an initiator, in the presence of stannous octoate or tetraphenyltin as a catalyst. Three different sizes of three-arm s-PLLA microspheres were synthesized, and the degradation rate was substantially promoted by a high glycerol content [136,137]. Erythritol was used as the initiator for s-PLLA synthesis, and the resulting polymers were used to prepare microspheres loaded with rifampicin (RIF) as the model drug. Different monomer/initiator molar ratios were used, affecting the morphology of the microspheres. An increase in the monomer amount led to perfectly spherical microspheres with smooth surfaces. Furthermore, the increased amount of monomer results in microspheres with a bigger diameter and a broader size distribution, sustaining the release of the drug for a considerable period of time (70-80% within 180 h) [129]. s-PDLLAs were also synthesized from pentaerythritol (tetrafunctional: 4-armed polymers) and dipentaerythritol (hexafunctional: 6-armed polymers) as polyol initiators. It was shown that s-PDLLAs degrade via a surface mechanism, with slower kinetics than the bulk-degrading linear ones. s-PDLLA and linear nanoparticles showed equally high loading efficiency and bioavailability of a model drug, atorvastatin [138]. Glucose was also used as an initiator to formulate s-PLGA microparticles delivering octreotide acetate for up to 1 month. This is the only product of the s-PLA family that has reached the stage of commercial exploitation, known as Sandostatin LAR® Depot [139]. Additionally, s-PLLA was synthesized via ring-opening polymerization of L-lactide, using xylitol, a natural compound, as an initiator, to produce microspheres with average diameters between 7 and 15 µm, which could be controlled by varying the s-PLLA concentration or Mw. Bovine serum albumin loaded in high molecular weight s-PLLA microspheres exhibited an encapsulation efficiency of 10-42%. The drug-loaded microspheres exhibited a low burst release and slow release rates of bovine serum albumin [140]. It was also proved that the degradation rate of s-PLLA increased with the increase of the xylitol molar fraction [141]. Stereocomplex star-shaped microparticles based on PLA (sc-star-PLA) have also been synthesized from enantiomeric poly(L-lactide) (PLLA) and poly(D-lactide) (PDLA) for targeted drug delivery applications. Interactions between L-lactyl and D-lactyl units give better mechanical performance, as well as thermal and hydrolysis resistance, a larger amount of drug adsorption, and slower drug release [142][143][144]. Six-arm sc-star-PLA microspheres were synthesized by controlled ring-opening polymerization of L- and D-lactide using dipentaerythritol as the initiator. Carboxylic end-groups were introduced onto the sc-star-PLA using succinic anhydride, which reacted with the OH end-groups. Different sizes of microspheres (1, 2, and 4 mm) were produced by varying the PLA concentration, the increase of which led to particles with increased diameter. sc-star-PLA-OH exhibited porous structures with a worm-like morphology, whereas the star-PLA-COOH formed large spherical objects. Stereocomplexation of star-PLA demonstrated remarkably high thermal stability, increasing the melting temperature by ~50-60 °C [145].
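A back-of-the-envelope calculation illustrates how the monomer/initiator feed ratio discussed above sets the theoretical arm length of such star polymers. This is a sketch under the usual assumptions of full conversion and equal growth from every hydroxyl of the multifunctional initiator; the feed values are hypothetical and not taken from the cited studies.

```python
M_LACTIDE = 144.13   # g/mol, one lactide unit (two lactic acid repeat units)

def theoretical_star_mn(lactide_per_initiator, n_arms, m_initiator_g_mol):
    """Theoretical Mn of a star PLA, assuming full conversion and
    equal chain growth from every hydroxyl of the initiator."""
    mn_per_arm = (lactide_per_initiator / n_arms) * M_LACTIDE
    return n_arms * mn_per_arm + m_initiator_g_mol, mn_per_arm

# Hypothetical feed: 200 lactide units per pentaerythritol (4 OH groups, 136.15 g/mol)
mn_total, mn_arm = theoretical_star_mn(lactide_per_initiator=200, n_arms=4,
                                       m_initiator_g_mol=136.15)
print(f"Theoretical Mn ~ {mn_total:,.0f} g/mol ({mn_arm:,.0f} g/mol per arm)")
# Increasing the monomer/initiator ratio lengthens each arm, which is the lever
# used in the cited studies to tune microsphere size and drug release.
```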
The higher thermal stability of sc-star-PLAs compared with star-PLAs was also confirmed by Satoh and co-workers. They reported an innovative method to synthesize sc-star-PLAs by click coupling of azido-functionalized poly(D-lactic acid)s (PDLAs) and ethynyl-functionalized PLLAs possessing 4, 5, and 6 arms. However, it was also shown that an increase in the number of arms caused a decrease in the Tm and crystallinity of the stereocomplexes [146]. s-PLA has been combined with several polymers to tune some of its properties, such as hydrophilicity, drug encapsulation, biodegradability, etc. Star-branched block copolymers of L-PLA and poly(ethylene oxide) (PEO) were synthesized using ring-opening polymerization with different LA/EO ratios and were then utilized to prepare micellar aggregates as drug delivery carriers for 5-FU and paclitaxel. The results showed that the degradation of the micellar form of the star-shaped copolymer was much more rapid than that of the linear diblock copolymer, due to the large hydrodynamic volume of PEO. Moreover, it was found that increasing the length (and relative amount) of the PLA block slowed down the overall 5-FU and paclitaxel release rate [147]. Furthermore, thermoplastic biodegradable hydrogels based on star-shaped PEO-PLA block copolymers with different numbers of arms were synthesized. Albumin was loaded in the star-shaped microspheres, and no significant difference in release was observed between the star polymers for the first 25 days. However, in the latter phase, 8-arm PEO-PLA showed an increase in the albumin release rate, while 2- and 3-arm PEO-PLA showed a decline. Four-arm PEO-PLA showed an intermediate release pattern, demonstrating almost zero-order release kinetics [148]. PEG was also used as a hydrophilic component for the preparation of amphiphilic star-shaped block copolymer micelles. Star-shaped PLLA-PEG micelles were synthesized in three stages of chemical reaction. Ibuprofen, as a hydrophobic model drug, was encapsulated into the s-PLLA-PEG micelles, which presented average diameters between 105 and 121 nm. The drug loading content and encapsulation efficiency were improved by increasing the Mw of PEG, while the drug release behavior could be controlled with different PEG molecular weights [94,149]. A new type of four-armed star-shaped porphyrin-cored poly(lactide)-b-D-α-tocopheryl polyethylene glycol 1000 succinate amphiphilic copolymer (TAPP-PLA-b-TPGS) was synthesized through an arm-first approach for use in drug delivery. Docetaxel loaded in TAPP-PLA-b-TPGS presented excellent pH-dependent drug-release behavior. These carriers were also found to generate singlet oxygen species and exhibit significant phototoxicity in HeLa cervical cancer cells, compared to the commercial drug Taxotere®, after irradiation with light of 660 nm wavelength [150].
A star-shaped cholic acid-core polylactide-D-α-tocopheryl polyethylene glycol 1000 succinate (CA-PLA-TPGS) block copolymer was developed for paclitaxel delivery for breast cancer treatment; it demonstrated superior in vitro and in vivo performance, with higher antitumor efficacy in comparison with paclitaxel-loaded poly(D,L-lactide-co-glycolide) (PLGA) nanoparticles and linear PLA-TPGS nanoparticles. These carriers also exhibited high stability and showed no change in particle size or surface charge during 90-day storage of the aqueous solution [151]. A star-shaped PLGA-b-cyclodextrin (PLGA-b-CD) copolymer was synthesized by reacting L-lactide, glycolide, and β-cyclodextrin in the presence of stannous octoate as a catalyst. An antitumor antibiotic, adriamycin, was encapsulated within the PLGA-b-CD microspheres with a modified double emulsion method. It was found that a decrease in polymer concentration resulted in an increase of the average particle size from 135.5 to 325.6 nm. The entrapment efficiency of adriamycin in the 220 nm particles was about 65%, exhibiting a high initial burst and a more rapid release rate [135]. Moreover, unimolecular micelles, each of which is formed by a single amphiphilic macromolecule, were developed. Unimolecular micelles do not dissociate upon dilution and are robust to environmental changes, in contrast to conventional block copolymer micelles held together by noncovalent interactions. Specifically, star-like block copolymers, PLA-b-PEG sequentially grafted from β-CD via ring-opening polymerization (ROP) and atom transfer radical polymerization (ATRP) reactions, were reported. These star-like amphiphilic polymers can form monodisperse and stable unimolecular micelles in water. The hydrophobic anticancer drug doxorubicin loaded in the PLA shell was efficiently taken up by tumor cells, demonstrating pH-controlled release behavior [135].
MPs Prepared by PLA/Poly(Alkylene Adipate) Matrices
Lately, several other aliphatic polyesters, such as poly(propylene adipate) [PPAd] and poly(butylene adipate) [PBAd], have drawn attention in the preparation of depot pharmaceutical formulations, offering an interesting alternative in the effort to improve or replace the characteristics of PLGA. PBAd, a biodegradable, non-toxic, linear aliphatic polyester, presents quick biodegradation and high thermal stability [152]. These properties make PBAd another eco-friendly candidate for use in biomedical applications, especially for drug delivery. In a recent study, block copolymers of PBAd in combination with PLLA, with PLLA/PBAd ratios of 95/5, 90/10, 75/25, and 50/50, were synthesized for the first time by Karava et al. The effect of the various PLA to PBAd ratios (95/5, 90/10, 75/25, and 50/50 w/w) on the enzymatic hydrolysis of the copolymers showed increasing erosion rates with increasing PBAd content [153]. The newly synthesized poly(L-lactic acid)-co-poly(butylene adipate) (PLA/PBAd) block copolymers were then used as microcarriers for the preparation of aripiprazole (ARI)-loaded long-acting injectable (LAI) formulations [154]. The results of the in vitro dissolution studies suggested a highly tunable biphasic extended release for up to 30 days (Figure 17i), which renders the prepared MPs very promising candidates for new formulations that may be able to maintain a continuous therapeutic level for an extended time period with a reduced lag-time, as compared to the currently marketed ARI LAI product.
SEM images taken after the completion of dissolution proved the crucial role that the PBAd content plays in the extent of polyester degradation and, consequently, in the drug release rates. Microspheres with a high PBAd content were extensively eroded (Figure 17ii) compared to PLA microspheres, due to the higher hydrolysis rate of the PLA/PBAd copolymers. As can be seen from Figure 17iii, hydrolysis is proportional to the PBAd content, while PLA showed the lowest hydrolysis rate, reaching about 3% within the first six days of testing. Nanaki et al. focused on the synthesis of a series of novel block copolymers of poly(L-lactide)-block-poly(propylene adipate) (PLLA-b-PPAd), which were further investigated as polymeric matrices for the preparation of naltrexone base (NTX)-loaded microparticle long-acting injectable (LAI) formulations [155]. PPAd is an interesting aliphatic polyester with low melting point and Tg values, which increase its hydrolysis rate [156]. Naltrexone base is used as a specific opioid antagonist in the treatment of both drug addiction and alcohol dependence. As observed from SEM images, all microparticles had a spherical morphology with smooth surfaces and without any agglomeration. However, their morphology changed after 8 days of dissolution: the SEM micrographs showed that their surface became rougher and the original spherical shape of the microspheres was modified. Diffusion was the main mechanism of drug release from these microparticles. In another study, Nanaki et al. studied the preparation of risperidone controlled-release microspheres as appropriate LAI formulations, this time based on a series of novel biodegradable and biocompatible PLA/PPAd polymer blends prepared by a solvent evaporation method [157]. Risperidone, due to its high hydrophobicity, has a very low dissolution rate, which did not exceed 10% release after six days. However, when risperidone was encapsulated in microspheres, its dissolution was enhanced because it was dispersed in the amorphous phase within the polymer matrices. In vitro drug release studies showed controlled release rates in all PLA/PPAd blend formulations, directly dependent on the PPAd amount. Dissolution results showed that microspheres consisting of neat PPAd release up to 95% of the risperidone within the first 3 days of dissolution, while at the same time only 40% of the API is released from neat PLA microspheres. In microspheres prepared from PLA/PPAd blends, the dissolution release rates lie between those of the neat polymers (i.e., neat PLA and PPAd). Increasing the PPAd amount in the PLA/PPAd blends also increased the API release rate, probably due to the low melting point and low glass transition temperature of PPAd compared to PLA.
PLA/PAsp Copolymer
Poly(aspartic acid) (PAsp) is a hydrophilic, fully biodegradable polymer belonging to the family of synthetic polypeptides; it is insoluble in organic solvents and can thus be processed only as a hygroscopic powder or as an aqueous solution. Lately, it has become an attractive candidate for drug carriers. By copolymerization of lactide with aspartic acid, the degradation rate, hydrophilicity, and mechanical and surface properties of PLA can be improved [158]. Tudorachi and his team presented the synthesis of PLA-co-aspartic acid copolymers (PLA-co-Asp), which were tested as biodegradable carriers in drug delivery systems [159].
The PLA-co-Asp copolymers were synthesized by a solution polycondensation procedure, using three different PLA/L-aspartic acid molar ratios (2.33/1, 1/1, 1/2.33). Diclofenac sodium, a non-steroidal anti-inflammatory drug, was subsequently loaded into the PLA-co-Asp copolymers. The PLA-co-Asp/diclofenac sodium systems (DDS1, DDS2) were prepared as microparticles by a precipitation and solvent evaporation method, and in vitro drug release experiments were further conducted. It was found that diclofenac is released much faster from the smaller microparticles of DDS1 (d < 39.4 µm), while a delay was observed for the larger microparticles of DDS2 (d < 115.2 µm). Nevertheless, at the end of the dialysis time (356 h), the diclofenac sodium released from the DDS1 system amounted to 62.47 wt.% of the total quantity of drug present in the system, compared with only 36.09 wt.% from DDS2.
Conclusions
Microfabricated systems provide numerous advantages over conventional drug delivery systems. Microcapsules and microspheres are acknowledged as exceptional carrier systems for several drugs and can be tailor-made to adhere to targeted tissue systems. Therefore, microparticle formulations may find application not only in sustained release, but also in the targeted delivery of drugs to a specific site in the human organism. Although their size can cause some limitations compared to other nanosized vehicles that can aim even at a cellular/subcellular target, microparticles can serve as excellent depot systems for controlled drug release, and they are able to effectively carry an abundance of (even large) molecules, including polypeptides and proteins. Polymeric microparticles have been extensively investigated and widely used as DDS. The tailorability of their properties (including crystallinity and degradation rates), their low toxicity, the variety of production techniques, and the ease and low cost of their fabrication render them a powerful DDS design strategy compared with other materials (e.g., inorganic ones). The utilization of microparticles based on PLA, PLGA, and related polymers and copolymers in the field of sustained release of therapeutics has lately become a prominent field of research due to their excellent biodegradability and biocompatibility. The available methods to fabricate such systems have proved to be very versatile. In this direction, enhanced drug release behaviors, different particle size-structure properties, and loading capacities can be achieved by adjusting some of the experimental variables of the preparation process. Despite the significant progress that has been made in the area of microencapsulation, many challenges still lie ahead. Significant emphasis should be given to the development of cheaper biopolymers for microencapsulation technology and to the development of suitable evaluation techniques, especially for bioadhesive microsystems. Thus, the establishment of harmless and efficient systems will require, in the future, in-depth studies of both the technological and biological features of these systems.
The mud volcanoes at Santa Barbara and Aragona (Sicily, Italy): A contribution to risk assessment
The Santa Barbara and Aragona areas are affected by mud volcanism (MV) phenomena, consisting of a continuous or intermittent emission of mud, water, and gases. This activity can be interrupted by paroxysmal events, with an eruptive column composed mainly of clay material, water, and gases. These are the most hazardous phenomena and, at present, it is impossible to define the parameters needed for modeling them. In 2017, two Digital Surface Models (DSMs) were produced by drone in both areas, thus allowing the mapping of the emission zones and of the areas covered by previous events. Detailed information about past paroxysms was obtained from historical sources and, together with the analysis of the 2017 DSMs, a preliminary hazard assessment was carried out for the first time at the two sites. Two potentially hazardous paroxysm surfaces of 0.12 km2 and 0.20 km2, for Santa Barbara and Aragona respectively, were defined. In May 2020, at Aragona, a new paroxysm covered a surface of 8,721 m2. After this, a new detailed DSM was collected with the aim of making a comparison with the 2017 one. Since 2017, a seismic station has been installed at Santa Barbara. From preliminary results, both seismic events and ambient noise showed a frequency of 5-10 Hz.
Introduction
The activity of mud volcanoes (MV) is a typical expression of sedimentary volcanism, occurring mainly in compressive tectonic regimes, along discontinuities, due to the presence at depth of gases under pressure or to diapirism phenomena. It consists mainly of a slow and continuous/intermittent uprising of mud, composed of a mixture of saline water, clay and gases (essentially methane and heavy hydrocarbons), from petroleum seepage (natural gas and oil) at depth to the Earth's surface (Mazzini et al., 2017). In some cases, a violent and instantaneous explosion ("paroxysm") of mud, water and gases can interrupt this activity. Worldwide, within 42 geographical areas, such as the Alpine-Himalayan, Pacific and Central Asian folding zones, the deep-water zones of the Caspian, Black and Mediterranean seas and the passive margins of the continents, a total of 2508 mud volcanoes and mud volcanic manifestations are present (Aliyev et al., 2015). The largest number of mud volcanoes, including the biggest and most frequently erupting ones and, in general, all their known types, are located in Eastern Azerbaijan and the adjacent water area of the South Caspian. It is in accordance with these factors that the Azerbaijan region is considered to be the "Motherland of mud volcanoes". In total, there are 353 mud volcanoes there, 199 of which are terrestrial. A complete catalogue of the paroxysm events from 1810 to 2018 for this region is reported in Baloglanov et al., 2018. According to a detailed study performed by Mellors et al. (2007) for the mud volcanoes in Azerbaijan, the temporal correlation between earthquakes and eruptions is most pronounced for nearby earthquakes (within 100 km) with intensities of Mercalli 6 or greater. According to Bonini et al. (2009), the mud volcanoes of the Pede-Apennine margin in Italy are intimately connected with rising fluids trapped in the core of anticlines associated with the seismogenic Pede-Apennine thrusts.
Monitoring the activity of mud volcanoes, in terms of gas outflow, could be helpful to predict a future paroxysmal event. From a geochemical point of view, monitoring is generally carried out by capturing gaseous emissions at the emitting conduits (Kopf et al., 2010). Sciarra et al. (2016), monitoring the soil gas concentrations (222Rn, CO2, CH4), carried out different geochemical surveys in 2006 in the Sidoarjo district (Eastern Java Island, Indonesia). However, this approach is not always effective and applicable, due to logistic difficulties, which make this kind of measurement infeasible and expensive in many contexts. For this reason, several multidisciplinary monitoring approaches have been proposed at different MVs around the world. More recently, Mazzini et al. (2021) estimated the total CH4 emissions from Lusi using both ground-based and, for the first time, satellite (TROPOMI) measurements; CO2 emission was additionally measured by ground-based techniques. In May and October 2011, the activity was documented with high-resolution time-lapse photography, open-path FTIR, and thermal infrared imagery (Vanderkluysen et al., 2014). In areas characterized by MVs, the gas "bubbling" phenomena can be effectively recorded by a geophysical monitoring system, such as a local seismic network. The low permeability of clays in mud-volcano areas (Kopf, 2002) suggests that, in the absence of large mud outflow (typical of quiescent phases), gas propagation from the reservoir mainly occurs through the uprising of gas bubbles (Etiope and Martinelli, 2002; Albarello, 2005). Recent research (Albarello et al., 2012) showed that seismic monitoring can provide useful signals to characterize the activity of mud volcanoes. The seismic signals recorded on the Dashgil mud volcano allowed the modeling of several transients as the surface effect of resonant gas bubbles in a shallow basin just below the volcano (Albarello et al., 2012). The interpretation of transient events in the seismic tremor in terms of bubble resonance suggests a new approach to estimate gas emissions in the mud volcano. In Italy, the mud volcanoes are clustered in three main geographical zones along the Apennines — the northern Apennines (mainly in the Emilia Romagna Region), the central Apennines (Marche and Abruzzo Regions) and the southern Apennines (Basilicata, Calabria and Campania Regions) — and in Sicily, where 13 mud volcano areas are present in both the central and western sectors. The sizes and shapes of the Italian mud volcanoes vary considerably. According to Martinelli et al. (2004), only a small proportion (20%) can be described as 'large', with a surface area >500 m2, while only 5% exceed 2 m in height.
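As an illustration of how seismic records can be screened for bubble-related tremor of the kind discussed above, the sketch below estimates the dominant frequency band of a signal with a power spectral density. It is a minimal example using synthetic data and an assumed 100 Hz sampling rate; it does not reproduce the processing chain actually applied to the Santa Barbara station data.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                               # assumed sampling rate (Hz)
t = np.arange(0, 600, 1 / fs)            # 10 minutes of synthetic record

# Synthetic tremor: background noise plus a 7 Hz resonance mimicking gas bubbling
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, t.size) + 0.8 * np.sin(2 * np.pi * 7.0 * t)

freqs, psd = welch(trace, fs=fs, nperseg=4096)
dominant = freqs[np.argmax(psd)]
print(f"Dominant tremor frequency: {dominant:.1f} Hz")

# Energy concentrated in a narrow 5-10 Hz band, as reported in the abstract for
# the Santa Barbara station, would be consistent with resonating gas bubbles.
```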
In Sicily, mud volcanoes are mostly located within the Caltanissetta and Agrigento Provinces (the Santa Barbara and Aragona locations respectively). These phenomena are locally known as "maccalube" (or macalube), a name that derives from Arabic and means "overturning". In some cases a violent and instantaneous explosion called a "paroxysm" can occur and the erupted material, consisting of mud breccias composed of a mud matrix with chaotically distributed angular to rounded rock clasts from a few millimetres to metres in diameter, can reach a considerable distance from the emission point. The volume of the erupted material is generally of the order of thousands to tens of thousands of cubic metres and covers a large portion of the surface. On 27 September 2014, at the Maccalube of Aragona, two children died, buried by thick erupted mud deposits during a violent paroxysm. At the Santa Barbara village, the last paroxysmal episode occurred in August 2008, causing significant damage to houses, roads, and electricity and water pipelines.

The majority of the mud eruptions occurred in the absence of any earthquake, suggesting that mud volcanoes may erupt in response to a seismic input only if the internal fluid pressure approaches the lithostatic one. A dormancy time is needed to trigger an eruption, related to the production rate of the driving gas required to overcome the permeability of the system at depth (Bonini et al., 2009).

In this paper, we have gathered historical information about the pre- and post-paroxysmal events that occurred in the past at both study areas, as a starting point for a correct hazard assessment. In October 2017, a seismic monitoring station was installed at Santa Barbara in order to collect seismic information at the site. Moreover, a number of drone surveys were performed both at Santa Barbara and at Aragona. Finally, at Aragona a drone survey was carried out a few days after the last paroxysmal event, which occurred on 19 May 2020, with the aim of mapping the surface of the erupted material and estimating its volume and thickness. A Digital Surface Model (DSM) was elaborated and the emission points at the Earth's surface were mapped. Based on the DSM analysis and on our historical information, two main hazardous paroxysm areas at Santa Barbara and Aragona have been delineated in this paper for the first time.

The study areas

The Santa Barbara and Aragona MV areas are located in the central and south-west sectors of the Sicily Region respectively, inside the Caltanissetta Basin (locations in Fig. 1). These two areas, consisting of a Late Miocene to Pleistocene accretionary prism, were formed simultaneously with the opening of the Tyrrhenian Sea, during the convergence between the African and Eurasian plates in the Neogene-Quaternary (Catalano et al., 2000b), reaching a deposit thickness of the order of some kilometres. At Santa Barbara, the mud volcanism is located east of the town of Caltanissetta, near the "Santa Barbara village". The composition of its deposits is essentially clayey, clayey-marly and sandy. Around the main mud emission, in the northern sector, several residential buildings are present, built mainly in the 1960s, while in the southern sector there are twenty single-family houses (Fig. 2a).
Several public facilities are present on the western side of the mud volcano; electricity pipelines, roads and services for about 4,000 residents should be considered for a correct risk assessment of the entire area. The Aragona MV area is located about 3.5 km from the town, in the SW direction. The Maccalube of Aragona has long been a remarkable natural tourist attraction and in 1995 it was established as an Integral Natural Reserve, nowadays managed by Legambiente. The geology of the entire area is mainly characterized by clay deposits, clayey sands and marls, alternating with sandstones, which favour a low-relief geomorphology (Fig. 2b). No residential buildings or public facilities are present around the main mud emission area, but the site represents a naturalistic attraction for tourists. After the 2014 paroxysm, in which two children died, the entire area was closed.

The historical background: a tool for the hazard assessment

The Maccalube of Aragona and Santa Barbara have been affected in the past by several paroxysmal events, characterized by violent explosions of gas and mud, which periodically interrupt the normal degassing activity with a rapid emission of considerable quantities of clayey material and ballistics, accompanied by strong rumbles. The paroxysmal activity, reaching a maximum column height of about 20-30 metres, is generally determined by the accumulation and sudden release of pressurized gases (mainly CH4, 95-97% vol.) at depth. The volumes of mud expelled during these events have reached tens of thousands of cubic metres and, consequently, after a paroxysmal event a drastic change in morphology occurs. Sometimes, during historical paroxysmal manifestations, the emitted gas gave rise to striking displays such as burning fountains (Grassa et al., 2012). However, MVs are not only a relevant geological phenomenon, as they also act as elements of hazard. Therefore, an understanding of the occurrence of historic events, together with the intensity of the pre- and post-event evidence associated with this phenomenon, could be a useful tool for the Civil Protection authorities in defining the most probable hazard scenarios for a correct risk assessment of both study areas.

The Santa Barbara historical paroxysms

Naturalists and geologists have described the activity of the mud volcano at Santa Barbara since the 1800s, reporting some of its major paroxysmal events (Carnemolla, 2017). The first scientific document was produced in 1823, a manuscript entitled "Descrizione geologico-mineralogica nei dintorni di Caltanissetta" by Gregorio Barnabà La Via, who documented one of the paroxysmal eruptions, reporting: "[…] on March 5th, 1823 at 5:25 PM, with a strong, gusty wind from the north and a clear sky, a few dense clouds with long stripes appeared. Five earthquakes occurred in 9 seconds without damage to factories. Going to the mud volcano with the Villarosa duke, Luigi Barrile and the Livolsi abbot, who had observed the phenomenon since 1818, we noted that the width of the cracks at the maccalube had increased to 50 cm (they were 27 cm) and that the height of the mud volcano had increased, with a continuous emission of mud, water and hydrogen sulphide up to a height of 2.30 m […]".
The Livolsi abbot, in his study entitled "Sul vulcano aereo di Terrapilata in Caltanissetta", reported a description of the entire mud volcano area: "[…] Its surface is conical in shape, and at first glance offers the appearance of an extinct volcano [...]". According to this manuscript, different paroxysms occurred in 1783, 1817, 1819 and 1823 (Madonia et al., 2011). Intense phenomena have continued to occur over time, and there is evidence of a significant event between 1930 and 1940.

On August 11th, 2008, near the village of Santa Barbara, a sudden emission of natural gas occurred, accompanied by the expulsion of large quantities of clayey material, gas and water, reaching a maximum height of about 30 metres. From the morning, the village was affected by intense soil cracking, causing diffuse damage to civil and industrial buildings. A general uplift of the area around the mud volcano, together with the presence of fractures with variable horizontal and vertical offsets, was observed (DRPC report, 2008). During the period just before the paroxysmal event, from December 2007 to August 2008, Cigna et al. (2012) recorded up to 3-5 cm of progressive movement accumulating in the direction of the satellite, using satellite-based synthetic aperture radar interferometry.

As a consequence of these phenomena, heavy damage to factories, roads, residential buildings and public facilities (water, gas and electricity pipelines) occurred. The Regional Department of Civil Protection ordered the evacuation of several buildings, both in the southern sector of the mud volcano area, at a short distance (hundreds of metres) from the MV area, and at a distance of 2.5 km from the main area, where large-scale soil deformation and fracturing occurred (DRPC, 2008).

At 16:52 on the same day (August 11th), a paroxysm occurred next to the Santa Barbara village, accompanied by a strong rumble and by a column about 30 metres high, composed mainly of clayey material, gas and water, which in seven minutes covered about 12,000 m2 with an estimated volume of about 9,550 m3 (INGV Report, 2008). The maximum thickness of the deposit was 3.5 metres next to the emission points, decreasing to 30 cm in the SE direction and reaching a total distance of about 136 m from the main vents. The paroxysmal event lasted several minutes and was preceded by a telluric event (Madonia et al., 2011) that had occurred a few hours earlier in the whole Terrapelata area and, contemporaneously, in the neighbouring area of St. Anna. According to Madonia et al. (2011), in August 2008, 5 earthquakes occurred with magnitudes ranging from 1.7 to 2.4 within a radius of 10-55 km from the sites. After the end of the paroxysm, an increase in the length of the pre-existing fractures occurred. The main pre- and post-event observations of these historical events are shown in Table 1.

The Aragona historical paroxysms

The main pre- and post-event observations of the historical paroxysms at Aragona are shown in Table 2. Since 1995, the year of establishment of the Natural Reserve, eight paroxysmal events have taken place: in 1998, 2002, 2005, 2008, 2010, 2012 (Fig. 3), 2014, and the last one on 19 May 2020. Grassa et al. (2012) reported the volumes and the covered areas for each of the first six events. The largest event was in 2005, with an estimated volume of about 19,600 m3 (Fig. 3B) covering an area of about 16,350 m2 (Fig. 3A).
It is interesting to note that a strong correlation exists between the erupted volume and the covered surface area for the paroxysms that occurred from 1998 to 2012 (no volume data are available for the 2014 paroxysm), as demonstrated by the high correlation coefficient (R2 = 1) shown in Figure 3C. In the same plot, the 2020 paroxysm falls far from the general trend highlighted previously, covering a smaller surface (approximately half) than the expected one. In our opinion, this could be linked to a different location of the main emissive vent, the 2020 event being the only eccentric one, and/or to the different nature of the emitted material.

Fig. 3. A) Estimated volume and B) affected surface at the Aragona mud volcanoes during paroxysmal events. C) Correlation between erupted volume and affected surface for the 1998-2012 events (Grassa et al., 2012, modified). In blue, the linear correlation with R2 = 1. The red square represents the 2020 paroxysm.

Associated hazards at the Santa Barbara and Aragona mud volcanoes

From the historical information obtained from past documentary sources, it is clear that the most hazardous phenomena in both areas are the paroxysms. They are quite common, especially at Aragona, and it is therefore reasonable to hypothesize that other hazardous events, of the same magnitude or higher, could occur again in the future. In all of the paroxysmal events that occurred in the past, both at Santa Barbara and at Aragona (Tables 1-2), diffuse soil fractures and deformations, even at considerable distances from the mud volcanism area, occurred during the pre-paroxysm period. In particular, at Santa Barbara the population felt several seismic events before the 2008 paroxysm.

Another important element that emerges from the historical descriptions is that, following the paroxysms, people approaching the mud volcano areas usually detected a strong acrid smell of gas, reasonably H2S. H2S can be lethal if breathed in high concentrations; it is a toxic, corrosive, irritant and colourless gas with the characteristic unpleasant smell of rotten eggs. It can cause chronic diseases of the respiratory organs through prolonged exposure even at very low concentrations; at concentrations of 200-250 ppm it can cause pulmonary oedema and risk of death, while at 1,000 ppm it is immediately lethal (NIOSH, 1981).

Digital Surface Model (DSM)

High-resolution DSM maps were produced for both study areas in 2017 and, in 2020, for the Aragona MV only, with a resolution of 0.1-0.15 m. For these surveys, we used a DJI Phantom III Professional drone (quadcopter) with a mounted 12 Megapixel digital camera (lens FOV 94°, 20 mm, Sony EXMOR 1/2.3" sensor, effective resolution of 12.4 MP). Before conducting the drone mapping, we planned the flight paths and areas for each flight mission. The drone was set to take aerial photographs in "autopilot mode", with the camera facing directly downwards, suitable for hilly terrain. The surveys were conducted with the camera mounted at 90°. We selected 75% forward and sideways overlap of the images. The acquisition of field data requires the determination of several ground control points (GCPs). Therefore, 11 points distributed within the defined area were recorded using a NAVCOM SF-3040 GPS with an accuracy of about 1 cm.
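Ahead of the processing steps described next, the following is a minimal sketch of the kind of DSM differencing used later in the paper to estimate the thickness and volume of the 2020 deposit. The elevation grids and cell size below are hypothetical toy values; real DSMs would be co-registered rasters exported from the photogrammetric processing.

import numpy as np

cell_size = 0.12  # m, comparable to the DSM resolution used in this study
dsm_2017 = np.array([[10.0, 10.1], [10.2, 10.0]])  # hypothetical pre-event elevations (m)
dsm_2020 = np.array([[10.8, 10.4], [10.2, 10.3]])  # hypothetical post-event elevations (m)

thickness = np.clip(dsm_2020 - dsm_2017, 0.0, None)    # deposit thickness per cell (m)
volume = float(thickness.sum()) * cell_size ** 2        # erupted volume (m^3)
area = float((thickness > 0).sum()) * cell_size ** 2    # covered surface (m^2)
print(f"volume ~ {volume:.4f} m^3 over {area:.4f} m^2")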
The images were processed with a Structure-from-Motion (SfM) and multi-view stereo approach in order to produce a high-resolution Digital Surface Model (DSM) and to identify the morphological structures linked to the sedimentary volcanic activity. These approaches allow the geometric constraints of camera position, orientation and GCPs from many overlapping images to be solved simultaneously through an automatic workflow. The image datasets were processed with the software Agisoft Photoscan (Agisoft, 2016). The post-processing of the acquired data, merged in GIS software (ArcGIS 10.5), allowed us to extract the thickness and the volume of the erupted material, together with the distance it reached.

Hazard assessment

In order to define the potential hazardous paroxysm scenarios for both areas, we consider in this paper the maximum real distances reached by the erupted material over time, through the analysis of the high-resolution (12x12 cm) DSMs acquired by drone during the 2017 surveys at Aragona and Santa Barbara. At the Santa Barbara mud volcano, the erupted material reached a total distance along its major axis of about 136 metres in the main event of 2008, while at Aragona it reached a total distance of 150 metres. In the 2014 paroxysm at Aragona, the distance reached by the erupted material was 111 m (Fig. 4). In this preliminary phase, in order to model the potential hazard scenarios, we assumed that both areas will be affected in the near future by similar erupted fallout deposits, reaching maximum distances of 136 m and 150 m for the Santa Barbara and Aragona areas respectively.

For these reasons, starting from our 2017 DSMs, we identified the mud volcanoes and bubbling pools in both areas (Fig. 5) as the potential emission points for a future paroxysmal event. Using the kernel density tool in ArcGIS 10.5, we produced density cluster maps (Fig. 5), which highlight two main NW-SE and NE-SW directions at Aragona (Fig. 5b), while at Santa Barbara the surface distribution appears inhomogeneous (Fig. 5a). Secondly, through elaboration in ArcGIS 10.5, we created from each emission point checked in 2017 an omnidirectional buffer circumference, considering an increase in distance of +30% with respect to the greatest historical distance reached, in order to define safety limits in both areas. For the hazard assessment, we elaborated 117 and 165 buffer circumferences with radii of 180 m and 195 m at Santa Barbara and at Aragona respectively (Fig. 6a and b). The final potential hazardous paroxysm areas, at both sites, are considered as the envelope of all the buffer circumferences elaborated (Fig. 7).

Uncertainties

The application of this methodology for the hazard assessment in both study areas is inevitably based on assumptions which introduce some uncertainty. At the same time, the absence of a modelling approach for the paroxysm events at the two study areas, and the poor availability of data for all the past events, led us to follow a semi-quantitative approach for the hazard definition. The Digital Surface Model elaborated in 2017 was used to calculate, with some uncertainty, the maximum distance reached by the erupted fallout material in ArcGIS 10.5. The emission points checked in 2017 at Santa Barbara and Aragona may change location over time, as they are constantly evolving, depending also on seasonality, weather conditions or new deposition of erupted clay material.
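A minimal sketch of the buffer-and-envelope construction described above, using the shapely library instead of ArcGIS, is given below. The emission-point coordinates are hypothetical; the +30% safety increase over the maximum historical runout follows the text (150 m at Aragona, giving a 195 m radius), and the union of the buffer circles plays the role of the envelope of Fig. 7.

from shapely.geometry import Point
from shapely.ops import unary_union

# Hypothetical emission points (projected coordinates, metres).
emission_points = [(0.0, 0.0), (40.0, 25.0), (-30.0, 60.0)]

max_historical_runout = 150.0                 # m, e.g. the Aragona value reported above
buffer_radius = max_historical_runout * 1.3   # +30% safety margin -> 195 m

buffers = [Point(x, y).buffer(buffer_radius) for x, y in emission_points]
hazard_area = unary_union(buffers)            # envelope of all buffer circles

print(f"radius = {buffer_radius:.0f} m, hazard area ~ {hazard_area.area / 1e6:.3f} km^2")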
Seismic monitoring activity at Santa Barbara

In October 2017, an INGV seismic station was installed at Santa Barbara (see Fig. 2 for location). It was equipped with a Lennartz 3D-LITE/1s short-period velocimeter, with a flat response in the 1-80 Hz bandwidth, and a 24-bit RefTek 130 seismic data logger. To take full advantage of the sensor frequency band, the sampling frequency was set at 200 Hz, and the signals were synchronized via GPS.

Paroxysm hazard assessment

The hazardous paroxysm areas for both sites were obtained as the envelope of all the buffer circumferences of Fig. 6. Areas of 0.12 km2 and 0.20 km2, potentially exposed to possible paroxysmal events, were calculated for the Santa Barbara and Aragona sites respectively (Fig. 7). In these two hazardous paroxysm areas, various geophysical phenomena (deformation, fracturing and seismic events), together with geochemical ones, could occur. For this reason, these two exposed areas should be closed to visitors and to residential or public activities, because of the hazardous phenomena that could occur before, during and after a paroxysmal event. In both areas, a dedicated safe path, outside the hazardous paroxysm areas of Fig. 7, should be created to allow visitors to observe these geological phenomena safely.

The decrease in gas output in the central area of the Maccalube of Aragona before the paroxysmal events could be an important parameter. According to Grassa et al. (2012), it may occur because an increase of the tectonic stress field in the compressive regime generates an overpressure of the interstitial pore fluids at depth while, at the surface, it reduces the permeability of the structural discontinuities along which the gases migrate, thus reducing the outgassing at the surface. According to these deductions, the paroxysmal event would occur when the gas pressure at depth exceeds the lithostatic resistance of the overlying rocks. The maximum distance reached by the erupted material, according to our analysis, is around 130 metres. The 2020 paroxysm occurred in a medium-to-high density area of emission points detected in our 2017 survey, where a NE-SW structural lineament had been highlighted (Fig. 5 and Fig. 9). In particular, the eruptive centre of the 2020 event is located, according to our thickness map of Fig. 8, where the maximum thickness is recorded (arrow in Fig. 8) and where the emission points had been mapped in 2017. Nowadays, the 2017 emission points have been buried by the new material erupted in 2020.

The seismic monitoring at Santa Barbara

Preliminary analysis of the continuous recordings allowed us to identify variations in the power of the ambient vibrations, mainly in the 5-10 Hz frequency range, which could be due to changes in the emission activity. Periods of intense activity have also been observed, as shown in Fig. 10.
These periods are characterized by numerous micro-events with high-frequency content (several tens of Hz). This micro-seismicity, of clearly local origin, appears to have energy and temporal characteristics similar to a swarm, i.e. events of comparable energy and stable inter-event times ranging from seconds to several minutes. Both the ambient noise and the seismic events show energy in the 5-10 Hz frequency range, with some possible overtones, which could be generated by local resonance phenomena. This activity could be related to the surface effect of resonant gas bubbles, but we cannot rule out the possibility of a deep origin connected to gas flows at the root of the "volcanic" system.

Discussion and conclusions

In this paper, for the first time, a preliminary hazard assessment of two main mud volcano areas of Sicily was carried out. We calculated the hazard scenarios based on the most recent paroxysmal events at Santa Barbara and Aragona, in order to define a realistic extent for a correct risk assessment. It is evident that the hazardous paroxysm areas that we have computed should be complemented by a probabilistic modelling approach, derived from parameters actually measured in both areas. For these reasons, it would be important to upgrade the current discrete multidisciplinary surveys, in terms of both acquisition frequency and number of parameters, with a new geochemical and geophysical observatory, in order to minimize the knowledge gaps in these two areas. In light of this, it is appropriate to realize and maintain a high-frequency multidisciplinary data acquisition system to allow the construction of a forecast model able to represent the real conditions as well as possible and, on this basis, a monitoring system should be implemented.

Nowadays, it is impossible to define "when" the next paroxysm will occur and how intense it will be. This is because there is currently not enough information to recognize the parameters that could change before a paroxysm, and no modelling approach for the phenomenon exists.

In this work, our hazard assessment for the Santa Barbara and Aragona areas represents a snapshot of the 2017 survey. The emission points checked in 2017 could change their location over time. It is therefore appropriate to monitor the new emission points and fractures at both sites as potential sources of future paroxysmal events, as demonstrated in 2020 at Aragona, where the paroxysm occurred at an emissive point mapped in our 2017 survey. It is important to underline that we cannot exclude that these paroxysmal events could occur outside the restricted area in which most of the surface emission points are located. At the same time, the current hazard maps for the two areas must be kept up to date. A better comprehension of the paroxysmal processes of sedimentary volcanism is needed, with particular reference to their hazard assessment; it will certainly be important in the near future to build a catalogue of paroxysmal events in order to be able to apply advanced assessment approaches such as the one proposed by Mellors et al. (2007).

From the historical information, we know that different phenomena could occur before a paroxysm in mud volcano areas, in particular deformation, soil fractures and increased seismicity.
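Relating to the 5-10 Hz energy described above for both ambient noise and micro-events, the following minimal sketch shows how the power in that band could be tracked over time with scipy. The input here is a synthetic signal rather than station data, and the processing parameters are arbitrary illustrative choices.

import numpy as np
from scipy.signal import spectrogram

fs = 200.0                          # Hz, the sampling rate used at the Santa Barbara station
t = np.arange(0, 60.0, 1.0 / fs)    # one minute of synthetic signal
signal = np.sin(2 * np.pi * 7.0 * t) + 0.2 * np.random.randn(t.size)  # 7 Hz tone plus noise

f, times, Sxx = spectrogram(signal, fs=fs, nperseg=1024)
band = (f >= 5.0) & (f <= 10.0)
band_power = Sxx[band, :].sum(axis=0)   # power in the 5-10 Hz band for each time window
print(band_power.round(3))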
After the paroxysmal event, according to the historical descriptions, a strong smell of acrid gas, reasonably H2S, is recorded. H2S, if breathed in high concentrations, can be lethal. It is a toxic, corrosive, irritant and colourless gas with the characteristic unpleasant smell of rotten eggs. It can cause chronic diseases of the respiratory organs through prolonged exposure even at very low concentrations; at concentrations of 200-250 ppm it can cause pulmonary oedema and risk of death, while at 1,000 ppm it is immediately lethal (NIOSH, 1981).

In October 2017, a short-period seismic station was installed at the Santa Barbara site. The continuous monitoring and the preliminary analysis of the acquired signals allowed us to highlight variations in the power of the environmental vibrations. Moreover, the presence of periodic micro-seismicity, likely linked to variations in emission and bubbling activity, was detected. However, the use of a single station does not allow a complete characterization of the seismic activity, for which the creation of a micro-network would be desirable. Continuous monitoring of the local microtremor and micro-seismicity, in particular before and during a paroxysmal event, could allow us to understand the source mechanisms of these events and to propose useful predictive models for risk reduction.

Only with the installation of a multidisciplinary geochemical and geophysical observatory at the two study areas could we hope to discriminate the "potential" phenomena that may occur before, during and after a paroxysmal event. For these reasons, different geochemical and geophysical parameters will have to be analysed, verified and validated in the near future. This could be a useful tool for the Civil Protection authorities in taking appropriate risk mitigation measures for the exposed population. A safe path outside the hazardous areas we have identified should be considered by the local administrations in order to reduce the risk. Finally, our hazardous paroxysm areas, at both sites, should be closed to visitors, especially during periods in which high deformation, fracturing and seismicity occur.

Fig. 2. Location of the two mud volcano areas: Santa Barbara (A) and Aragona (B). Image from ArcGIS 10.5, ESRI.
The activity of the Maccalube of Aragona, according to Greek, Roman and Arab historical evidence, has been documented for at least 2,500 years. The cosmetic and therapeutic use of the mud emitted from these geological manifestations was reported by Plato, Aristotle, Diodorus Siculus and Pliny. In 1777, the first large mud eruption (today called a paroxysm) was documented by Abruzzese (1952), who reported: "[…] In the early hours of September 29th, the inhabitants of the neighbouring villages felt a strong shaking of the ground and observed a copious mud flow from the craters up to different heights". Furthermore, the Ferrara abbot described the same paroxysm as one of the most violent eruptions known: "[…] On September 29th they first heard a roaring noise in all the surroundings. The ground was shaking around a great chasm that had formed a few miles away […] an enormous column of mud rose up to almost a hundred feet high, before being abandoned by the force that pushed it upward […] the terrible explosion lasted half an hour, then calmed down, but resumed after a few minutes and continued intermittently all day, while the smoke lasted all night. During the whole phenomenon the very strong smell of hydrogen sulphide gas was felt at a great distance in all the surroundings." An unknown author reports the same eruption on the 30th, describing: "[…] on September 30th 1777, half an hour after the sun had risen, a murmur was heard in the above-mentioned place which, advancing by the moment, surpassed the roar of the strongest thunder. The earth began to tremble and showed deep cracks, which widened more than usual to ten palms, at the main crater, from which clay and murky water emerged perpetually, like a cloud of smoke, although in places it was flame-coloured […] this eruption lasted for half an hour and, with a quarter-hour interval, was repeated three more times. The next day, the emitted clay material appeared, however, of natural consistency, in such a way that it allowed the curious to approach the mud volcano. The erupted clay material still retained the smell of sulphur, which was felt more strongly during the eruption."

On October 19th, 1936, at 5 o'clock, inhabitants of the neighbouring villages of Aragona and Giancaxio heard two rumbles, like thunder, which followed one another within a short period of time. A violent explosion destroyed the central part of the Maccalube, from which an imposing fountain of mud rose, dragging in its ascent blocks of marl mixed with sandstones and gypsum. This fountain reached ten to fifteen metres in height. Only at sunrise did people notice that a large black mass had covered the place where the mud volcanoes are located, over about 2 hectares. From the survey data collected by Prof. Ponte and Prof. Abruzzese, "[…] since February 1935 there had been a soil fracture extending for about 400 m in the E direction, and then for 600 m towards the W. In March 1935, in the proximity of the fracture, several mud volcanoes arose, some of which reached a height of one metre."

Fig. 5. Density maps of the potential emission points investigated (red: high-density values; yellow: low-density values). A) Santa Barbara MV area and B) Maccalube of Aragona. (Source: 2017 DSMs in ArcGIS 10.5)

Fig. 8. Thickness map of the material erupted during the paroxysm of 19 May 2020. Inside the white square, the emission point detected in 2017, corresponding to the main centre of the 2020 paroxysm. (Source: 2020 contour map in ArcGIS 10.5)
Fig. 9. Density maps of the 2017 emission points (red: high density; yellow: low density). The surfaces covered by the 2014 and 2020 paroxysms are shown with red and grey lines respectively. In the white square, the 2017 emission points likely responsible for the new 2020 paroxysmal event. (Source: 2017 DTMs in ArcGIS 10.5)

Fig. 10. Example of micro-seismicity recorded by the seismic station installed at Santa Barbara: (a) time signal of some minutes of the vertical component (velocity) record and (b) zoom on a single waveform, with the relative spectrogram (c) and amplitude spectrum (d). The spectrogram highlights the presence in the ambient noise of a continuous energy band in the 5-10 Hz frequency range and some possible overtones. The same frequencies can be identified in the amplitude spectra of the micro-events, suggesting a possible link to local resonance phenomena.

Table 1. Pre- and post-event observations of the historical paroxysms at Santa Barbara.

Table 2. Pre- and post-event observations of the historical paroxysms at Aragona.
7,907.2
2021-11-10T00:00:00.000
[ "Environmental Science", "Geology" ]
dbWGFP: a database and web server of human whole-genome single nucleotide variants and their functional predictions

The recent advancement of next-generation sequencing technology has enabled the fast and low-cost detection of genetic variants across the entire human genome, making whole-genome sequencing an increasingly common approach in the study of disease-causing genetic variants. Nevertheless, there is still no repository that collects predictions of the functionally damaging effects of human genetic variants genome-wide, although it is well recognized that such predictions play a central role in the analysis of whole-genome sequencing data. To fill this gap, we developed a database named dbWGFP (a database and web server of human whole-genome single nucleotide variants and their functional predictions) that contains functional predictions and annotations of nearly 8.58 billion possible human whole-genome single nucleotide variants. Specifically, this database integrates 48 functional predictions calculated by 17 popular computational methods and 44 valuable annotations obtained from various data sources. Standalone software, user-friendly query services and free downloads of this database are available at http://bioinfo.au.tsinghua.edu.cn/dbwgfp. dbWGFP provides a valuable resource for the analysis of whole-genome sequencing, exome sequencing and SNP array data, thereby complementing existing data sources and computational resources in deciphering the genetic bases of human inherited diseases.

Introduction

The identification of genetic variants responsible for human inherited diseases is one of the major tasks in medical and human genetics (1). With the evolution of next-generation sequencing technology, it has become more and more feasible to sequence all genetic variants in the entire human genome at low cost and in a short period of time (2,3), making whole-genome sequencing a reality in the study of human inherited diseases. Whole-genome sequencing can typically detect many more genetic variants than the traditional SNP array technology, and many sequenced SNVs occur at low frequency or de novo. For example, the number of single nucleotide variants (SNVs) detected for an individual in the 1000 Genomes Project is about 4 million on average, about 3-4 times more than detected by the Affymetrix GeneChip genome-wide human SNP array 6.0. Among these SNVs, about 29% occur at low frequency (<1%). These properties, together with the fact that the number of patients and normal individuals in a whole-genome sequencing study is typically small, prohibit the direct application of statistical genetics approaches such as genome-wide association (GWA) studies (4)(5)(6) to the analysis of whole-genome sequencing data. The recent advancement of exome sequencing studies (7,8) has shown that the analysis of functionally damaging effects can be a powerful way to identify disease-causing SNVs (9,10).
For example, we have previously demonstrated that the integration of multiple functional scores of nonsynonymous SNVs and association scores of the genes hosting these SNVs, by a carefully designed statistical model, is effective in pinpointing pathogenic SNVs for autism, epileptic encephalopathies and intellectual disability (11,12). However, a majority of SNVs in whole-genome sequencing studies occur in non-coding regions, and there is still no repository that collects functional predictions and annotations of such variants. These facts have greatly restricted the scope of functional analysis of whole-genome sequencing data. Therefore, an urgent demand in whole-genome sequencing studies is to construct a database that collects functional predictions and annotations for the large number of sequenced SNVs.

There are dozens of computational methods for predicting the functionally damaging effects of nonsynonymous SNVs that occur in protein-coding regions, with examples including but not limited to SIFT (13), PolyPhen-2 (14), MutationTaster (15), MSRV (16), SinBaD (17) and many others (18,19). Whole-exome predictions of these methods have also been collected in databases such as dbNSFP (20). For SNVs occurring in non-coding regions, conservation information based on multiple sequence alignments or phylogenetic trees, such as GERP++ (21), SiPhy (22) and PhyloP (23), serves as a major feature for characterizing the functional implications of SNVs. With the growth of functional annotations of the human genome, large-scale efforts have also been made to interpret functional non-coding variants. For example, two leading algorithms, Combined Annotation-Dependent Depletion (CADD) (24) and Genome-Wide Annotation of VAriants (GWAVA) (25), have extended their functional predictions to non-coding variants by integrating various genomic and epigenomic annotations.

Different computational methods have their own strengths and weaknesses, because they use different annotations, adopt different statistical or machine learning models, and are trained with different training data. Therefore, a more comprehensive way of assessing the functional implications of SNVs is to use the prediction results of multiple methods to make more reliable inferences. With this understanding, we developed dbWGFP, a database of whole-genome single nucleotide variants and their functional predictions. In this database, we collected nearly 8.58 billion possible human whole-genome SNVs. For each SNV, we collected 32 functional prediction scores calculated by 13 methods, 15 conservation features derived from 4 approaches, 1 sensitivity measurement and 44 valuable annotations obtained from the ENCODE Project. We further compiled a cross-platform program to enable ultra-fast search of this database and offer user-friendly web services and free downloads at http://bioinfo.au.tsinghua.edu.cn/dbwgfp.

Methods

dbWGFP provides a well-designed database that contains 48 functional prediction scores and 44 valuable annotations for nearly 8.58 billion human SNVs. The overall structure of this database is shown in Figure 1. To meet the demands of different research purposes, we offer two versions of this database. In the lite version, we include only the basic information of SNVs and their functional prediction scores (Table S1). In the full version, we further include annotations extracted from dbSNP (26), CADD (24), the ENCODE Project (27) and the 1000 Genomes Project (28) (Table S1).
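As a rough back-of-envelope check (my own assumption, not a calculation from the paper), the total of nearly 8.58 billion SNVs is approximately what one obtains by allowing three alternate alleles at every non-N base of the GRCh37 reference:

non_n_reference_bases = 2_858_750_000   # approximate non-N length of GRCh37 (assumed)
alternate_alleles_per_base = 3          # any reference base can mutate into three other nucleotides
print(non_n_reference_bases * alternate_alleles_per_base)  # ~8.58 billion possible SNVs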
Single functional predictions have their own advantages and limitations in scope of usage and in prediction power for different types of variants. For example, PolyPhen-2, one of the most accurate methods for predicting the functional effects of nonsynonymous SNVs, is restricted to variants located in protein-coding regions, because this method calculates the functional implications of SNVs based on protein sequence and structure. phastCons adopts a phylo-Hidden Markov Model (phylo-HMM) to detect conserved elements and provides a measure of conservation for nearly all possible SNVs. However, this method lacks the support of functional evidence for variants and overlooks the relative importance of variants in the processes of transcription and translation (21). On the other hand, current applications call for functional predictions of not only high accuracy but also high coverage. For example, in the widely used strategy for analyzing exome sequencing data, functional prediction scores are used to filter out variants not likely to be causative. However, exome sequencing can yield variants not only in protein-coding regions but also in flanking regions such as promoters and splice sites. With this understanding, we tried to construct a database that includes as many functional prediction scores and annotations as possible. Specifically, we collected functional prediction scores that meet two standards. First, the method for calculating a prediction score should be formally published. Second, the method should provide a website for downloading pre-calculated scores or a software package for calculating scores. With these criteria, we collected 48 functional prediction scores derived from 17 methods. Among them, the scores of MSRV and SinBaD were calculated by using their software, and the other scores were downloaded from websites.

Collection and curation of SNVs

We collected all possible SNVs in the human genome by integrating those occurring at least once in dbSNP (26), dbNSFP (20,29) and CADD (24). By doing this, we obtained a total of 8,576,251,873 human SNVs based on the GRCh37/hg19 reference. We then extracted annotations for these SNVs from dbSNP, CADD, the ENCODE Project and the 1000 Genomes Project, including consequence type, corresponding codons and genes, allele frequencies, positions, distance to splicing site and many other properties. The 15 conservation features are derived from phastCons (33), PhyloP (23), GERP++ (21) and SiPhy (22). The single sensitivity measurement describes subgroups of non-coding categories that share almost the same selective constraint as coding genes (34). Details about these functional prediction scores are summarized in Table 1 and described briefly as follows.

Extraction of functional prediction scores

We extracted SIFT, PolyPhen-2, LRT, MutationTaster, Mutation Assessor, FATHMM, RadialSVM and LR scores from the dbNSFP database (Version 2.4). These scores measure the functional changes of the encoded protein for a nonsynonymous SNV, whose occurrence may result in a change of amino acid and potentially affect protein structure and function. Briefly, SIFT takes advantage of position-specific probability estimation using a PSSM with Dirichlet priors to estimate whether the altered amino acid affects protein function (13). The smaller the SIFT score, the more likely the SNV is to destroy the function of the protein.
PolyPhen-2 calculates a set of features for a SNV based on the encoded protein sequence and protein structure, and trains a naïve Bayes model coupled with entropy-based discretization to identify the structural and functional effect of the SNV (14). Based on the null hypothesis that each codon is evolving neutrally, with no difference in the rate of nonsynonymous to synonymous substitution, LRT adopts the log likelihood ratio of the conserved model relative to the neutral model to predict the deleteriousness of a SNV (18). Similar to SIFT, the smaller the LRT score, the more likely a SNV is to destroy the function of the protein. MutationTaster computes a large number of sequence-based features and trains a naïve Bayes classifier to predict potentially deleterious nonsynonymous SNVs (15). Based on the evolutionary conservation of the affected amino acid in protein homologs, Mutation Assessor evaluates the functional effect of a SNV resulting in an amino acid change (19). Mutation Assessor can predict both somatic mutations discovered in cancers and missense SNVs. FATHMM relies on hidden Markov models to predict the functional, molecular and phenotypic effects of missense variants or cancer-associated variants (32). RadialSVM and LR are merged prediction scores derived by using an SVM and logistic regression, respectively, to integrate 10 existing prediction scores (SIFT, PolyPhen-2 HDIV, PolyPhen-2 HVAR, GERP++, MutationTaster, Mutation Assessor, FATHMM, LRT, SiPhy, PhyloP) and the maximum allele frequency in the 1000 Genomes Project (29).

We downloaded the Grantham and CADD scores from the CADD website (Version 1.0). The Grantham score indicates differences in physicochemical properties between amino acids; the larger the difference score, the more likely a SNV is to destroy the function of the host protein (30). The CADD score is obtained by integrating annotations from the Ensembl Variant Effect Predictor (VEP) (35), the ENCODE Project (27) and UCSC Genome Browser tracks (36) to prioritize whole-genome functional variants. CADD provides two types of prediction scores: the raw score, with high resolution, and the scaled score, which is easier to interpret and comparable across different CADD versions or models. We downloaded the GWAVA score from its website (25). This method predicts the functional effect of non-coding SNVs based on sequence-based properties and a large set of annotations of non-coding elements from the ENCODE and GENCODE projects (37).

We downloaded the PhastCons and PhyloP scores from the UCSC Genome Browser. PhastCons uses a hidden Markov model to predict the probability that a SNV belongs to a conserved element, based on the multiple sequence alignment of the human genome against other species (33). PhyloP computes an exact p-value under a continuous Markov substitution model to estimate the interspecies conservation at each SNV (23). We downloaded the SiPhy scores from the public ftp site of the Broad Institute. SiPhy takes advantage of rigorous statistical tests to identify bases under selective constraint based on a multiple sequence alignment of 29 mammals. SiPhy also estimates the stationary distribution of the different nucleotides at a site (22). GERP adopts maximum likelihood evolutionary rate estimation to calculate position-specific estimates of evolutionary constraint (38). GERP++, an advanced version of GERP, uses a more rigorous set of algorithms to calculate position-specific 'rejected substitutions' scores and to identify evolutionarily constrained elements (21).
The GERP++ neutral evolution scores, rejected substitution scores, element scores and element p-values were all downloaded from the GERP website. We calculated the MSRV and SinBaD scores by using the software packages provided by these methods. Briefly, MSRV applies an ensemble learning approach with a set of 24 physicochemical properties and 2 conservation scores to prioritize disease-causing nonsynonymous SNVs (16). SinBaD adopts a logistic regression model with 90 binary features obtained from multiple sequence alignments (17) to quantitatively measure the functional effects of mutations not only in protein-coding regions but also in promoter regions and introns.

Extraction of annotations

In the full version of dbWGFP, we further collected 44 useful pieces of information or annotations from dbSNP, CADD, the ENCODE Project and the 1000 Genomes Project. First, we included basic information for each SNV and its corresponding codons and genes, including the reference SNP ID, ancestral base, annotation type, consequence type of the variant, ENSEMBL gene ID, ENSEMBL transcript ID, CCDS ID, gene name, protein accession number and ID in the UniProtKB database (39), reference codon, and reference and substituted amino acids. Second, we extracted related annotations from the 1000 Genomes Project, including the validation status, project phase, whether the variant is common or not, and different types of allele frequency for different populations. Finally, we included from CADD or the ENCODE Project annotations such as distance to the closest Transcribed Sequence Start (TSS), distance to the closest Transcribed Sequence End (TSE), amino acid position, codon position, base position from the transcription start, relative position in the transcript, base position from the coding start, relative position in the coding sequence, distance to the splice site, whether the closest splice site is an ACCEPTOR or a DONOR, total number of exons, and total number of introns.

Coverage and correlation of functional scores

Different types of functional scores are designed for different types of variants. For example, SIFT and PolyPhen-2 can only predict the deleteriousness of nonsynonymous SNVs, while CADD and GERP++ can estimate functional effects for SNVs across the whole genome. Therefore, we summarize the coverage of each functional effect score for each chromosome in Table 2. From the table, we can see that CADD and the four conservation scores have high coverage, while functional prediction scores designed only for nonsynonymous SNVs, including SIFT, PolyPhen-2, LRT, MutationTaster, Mutation Assessor, FATHMM, RadialSVM, LR, MSRV and SinBaD, have low coverage.

Different prediction methods give different functional effect scores for the same SNV. Therefore, we checked the pairwise agreement between different prediction scores for SNVs occurring on chromosome 22 by using Spearman's rank correlation coefficient, and we summarize the results in Figure 2. From this figure, we can see that most prediction scores have medium to high correlations with a few other scores. For example, the prediction scores of MSRV are highly correlated with those of SinBaD, and the scores of PhastCons are highly correlated with those of GWAVA. Nevertheless, there also exist some scores (e.g. MutationTaster) that have low correlations with the others.
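A minimal sketch of the pairwise agreement computation described above, using pandas, is given below. The score table is a tiny hypothetical excerpt rather than the chromosome 22 data, and the column names are illustrative only.

import pandas as pd

# Hypothetical functional scores for a handful of SNVs (None = score not available).
scores = pd.DataFrame({
    "SIFT":      [0.01, 0.30, 0.80, 0.05, None],
    "PolyPhen2": [0.99, 0.40, 0.10, 0.95, 0.20],
    "CADD":      [25.0, 10.0, 3.0, 22.0, 8.0],
})

# Pairwise Spearman rank correlations; pandas excludes missing values pairwise.
print(scores.corr(method="spearman").round(2))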
Comparison of the prediction power of different scores

dbWGFP contains 15 conservation features derived from 4 conservation calculation approaches and 32 functional features calculated by 13 popular functional prediction methods. Seven of these methods (phastCons, PhyloP, GERP++, SiPhy, Grantham, CADD and GWAVA) aim to provide prediction scores for variants spread over the whole genome. The other ten methods (SIFT, PolyPhen-2, LRT, MutationTaster, Mutation Assessor, FATHMM, RadialSVM, LR, MSRV and SinBaD) focus only on variants in protein-coding regions. In order to obtain a comprehensive understanding of the prediction power of these methods, we collected a set of disease-causing SNVs from the HGMD database and a set of neutral SNVs from the 1000 Genomes Project. The disease-causing variants, used as positive cases, were further partitioned into 52,007 protein-coding SNVs, 8,822 splicing SNVs and 1,811 regulatory SNVs. Accordingly, the neutral variants, used as negative controls, were partitioned into 272,534 protein-coding SNVs, 2,897 splicing SNVs and 701,984 regulatory SNVs. For each of these variants, we extracted the conservation scores and functional scores from the dbWGFP database, obtaining a total of 17 scores, and focused on scores that cover at least 5% of the SNVs in a category.

We first performed a t-test to see whether a prediction score is significantly different between positive and negative SNVs. The results, shown in Table 3, suggest that all 17 scores are significantly different between the two classes of SNVs in protein-coding regions. For SNVs in splice sites, only 8 scores cover 5% or more of the SNVs; among these, SinBaD has the highest power in discriminating disease-causing variants from neutral ones. For SNVs in regulatory regions, only 5 scores cover 5% or more of the SNVs, and GWAVA has the highest discriminant power.

We then explored the ability of each score to predict disease-causing SNVs. For this purpose, we varied the decision threshold for a score and calculated the sensitivity and specificity at each threshold value. Here, the sensitivity is defined as the fraction of positive SNVs whose scores exceed a threshold, and the specificity is defined as the fraction of negative SNVs whose scores do not exceed a threshold. We then plotted the receiver operating characteristic curve (sensitivity versus 1-specificity) and calculated the area under this curve (AUC). The results, shown in Table 3, suggest that the performance of the different methods is quite different. For SNVs in protein-coding regions, LR has the highest performance, followed by RadialSVM, FATHMM, MSRV and CADD, respectively. For SNVs in splice sites, SinBaD outperforms all the other methods. For SNVs in regulatory regions, GWAVA has the highest performance. This comprehensive comparison of the prediction power of different scores therefore provides insight for the choice of suitable prediction scores in real applications. Overall, the prediction of disease-causing SNVs in splice sites and regulatory regions is much harder than in protein-coding regions, because the AUCs of the former two categories are typically much lower than those of the latter. This observation points to an urgent demand for effective computational tools for predicting the functionally damaging effects of variants in non-coding regions.

The coverage of different types of prediction scores varies significantly, resulting in a missing data problem. To address this problem, we propose the following three methods. First, users can completely ignore missing data and focus only on scores with complete information.
Second, users can adopt a statistical or machine learning approach that can easily handle missing data, Fisher's method being one example.

Software

dbWGFP offers a user-friendly web interface to facilitate access to the database. The web interface provides two main components: a query service for retrieving functional prediction scores and annotations of SNVs in different data formats, and a download service for setting up a local version of the database. In the step-by-step mode of the query service, users can upload a file containing query variants and retrieve the results online. In the batch query mode, users can upload a file containing query variants together with an email address; a URL to the query results is then sent via email. dbWGFP provides two versions for downloading. The lite version includes the prediction scores of human whole-genome SNVs. The full version includes both prediction scores and annotations. Both versions include a search program that can retrieve predictions and/or annotations in a highly efficient way. Different releases of dbWGFP are also archived for easy access.

Ultra-fast search program

Sequentially scanning dbWGFP to retrieve a query SNV is prohibitive because of the huge number of SNVs collected in this database. Therefore, we developed a highly efficient search program to enable ultra-fast locating of a SNV in the database. In order to test the speed of the search program, we selected an individual (HG00096) at random from the 1000 Genomes Project, extracted a total of 3,844,226 SNVs occurring in the whole genome of this individual, and applied the search program to retrieve predictions and annotations from dbWGFP. The results are summarized in Table 4. From the table, we can see that for the lite version of dbWGFP, our search program, when using 8 threads simultaneously, can deal with queries at a speed of 4,999 SNVs per second, and it takes only 769 s to obtain functional predictions for SNVs spread across the whole genome of a human. For the full version of dbWGFP, our search program can deal with queries at a speed of 3,647 SNVs per second, and it takes only 1,054 s to obtain both functional predictions and annotations for SNVs spread across the whole genome of a human. We also notice that the running time when taking all variants as a single query file is significantly shorter than the sum of the running times when taking individual chromosomes as separate query files. This is because multiple threads read separate database files for different chromosomes in the former case. Hence, we suggest that users combine their data for different chromosomes into a single query file to maximize the search performance (Table 4).

In the step-by-step mode of the query service, a user can upload a file containing query SNVs and then check the web site for results. In the batch mode, a user can upload an archive including the query SNVs and an email address, and then check their email for the results later. In either mode, a query file typically includes multiple lines, each of which is given in one of the following four formats. First, a query line can be given as two-column text ('chr pos'). In this case, the server locates the query position on the query chromosome and outputs the predictions and annotations of all possible SNVs at that position. Second, a query line can be given as three-column text ('chr pos ref').
In this case, the server locates the query position on the query chromosome and outputs the predictions and annotations of all possible SNVs that occur at the query position and whose reference nucleotide is identical to the query. Third, a query line can be given as four-column text ('chr pos ref alt'). In this case, the server outputs the predictions and annotations of the SNV defined by the query. Finally, a query line can be given in vcf format. In this case, the server also outputs the predictions and annotations of the SNV defined by the query. Considering that the number of SNVs in a real whole-genome sequencing study is typically huge, the dbWGFP web service also accepts input files compressed in gz, bz2, zip or rar formats. Similarly, output files are also provided in these compressed formats.

Download service

The download service allows a user to download parts of or the entire dbWGFP database. For both the lite and the full versions, we partitioned the SNVs according to chromosomes and provide files compressed in gz format for the individual chromosomes. We further generated a single compressed archive file for each version.

Conclusions and discussion

In this paper, we have introduced dbWGFP, a database and web server of human whole-genome single nucleotide variants and their functional predictions. This database collects nearly 8.58 billion possible SNVs across the whole human genome, with each SNV described by 48 functional prediction scores and 44 valuable annotations. To the best of our knowledge, dbWGFP is the first large-scale comprehensive database of functional predictions and annotations of human whole-genome SNVs. This database can not only help in capturing causative variants from the massive number of candidates derived from whole-genome or exome sequencing data, but also provides a valuable resource for the study of human genetic variants. For example, after sequencing the whole genome of one or a few patients, a set of candidate SNVs can be extracted from the sequencing data. Given all the candidate SNVs as input, dbWGFP can be used to efficiently collect functional prediction scores and annotations for each candidate SNV. Based on these scores and annotations, researchers can filter out the large set of neutral SNVs that are believed to have little functional effect, and retain the remaining functional SNVs for further study. Similarly, dbWGFP can also be used in the analysis of exome sequencing or SNP array data, thereby complementing existing data sources and statistical methods in deciphering the genetic bases of human inherited diseases.

dbWGFP can be further improved in the following respects. First, computational methods for predicting the functional effects of whole-genome variants are currently still quite limited, since scientists have only recently begun to make such efforts. As more prediction approaches become available in the near future, more functional prediction scores can be incorporated into our database. Second, important gene and protein annotations can also be included in our database. These annotations may include, but are not limited to, gene annotations from Gene Ontology (40), the protein-protein interaction network from STRING (41), pathway information from KEGG (42) and many others. Third, phenotypic properties of human whole-genome SNVs can also be included in our database. These properties can be extracted from existing databases such as OMIM (43), HGMD (44) and COSMIC (45).
The inclusion of such phenotypic information may further improve the inference of causative variants for human inherited diseases, as we have shown in our previous studies on prioritizing candidate genes (46). Finally, although the current release of dbWGFP focuses on single nucleotide variants, other types of variants, such as small insertions and deletions, can also be included in the future.
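To make the four accepted query-line formats concrete, the sketch below shows how a client-side script might normalise a query file before submitting it to the dbWGFP web service. The column layouts ('chr pos', 'chr pos ref', 'chr pos ref alt' and VCF) follow the description above; the function name, the simple VCF heuristic and the error handling are illustrative assumptions, not part of dbWGFP itself.

```python
# Minimal sketch: normalise dbWGFP query lines into (chrom, pos, ref, alt) tuples.
# The four accepted layouts follow the text above; anything not stated there
# (function name, VCF detection, error handling) is an assumption for illustration.

def parse_query_line(line):
    """Return (chrom, pos, ref, alt); ref and/or alt are None when unspecified."""
    line = line.strip()
    if not line or line.startswith("#"):       # skip blank lines and VCF headers
        return None
    fields = line.split()
    if len(fields) >= 8:                        # crude VCF heuristic: CHROM POS ID REF ALT QUAL FILTER INFO ...
        chrom, pos, _, ref, alt = fields[:5]
        return chrom, int(pos), ref, alt
    if len(fields) == 2:                        # 'chr pos'          -> all possible SNVs at this position
        return fields[0], int(fields[1]), None, None
    if len(fields) == 3:                        # 'chr pos ref'      -> all SNVs with this reference base
        return fields[0], int(fields[1]), fields[2], None
    if len(fields) == 4:                        # 'chr pos ref alt'  -> a single SNV
        return fields[0], int(fields[1]), fields[2], fields[3]
    raise ValueError(f"unrecognised query line: {line!r}")

queries = [q for q in map(parse_query_line, open("queries.txt")) if q]
```

As suggested above, merging the queries for all chromosomes into a single file such as this lets the multithreaded search program read the per-chromosome database files in parallel and gives the best throughput.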
5,875.8
2016-03-17T00:00:00.000
[ "Biology", "Computer Science" ]
Active business objects (ABO): when agents meet ABC/ABM based management The paper studies a new paradigm for building and using business information systems, accompanied by the required enabling technologies and business models, which will assimilate innovative marketplace concepts and thus allow for better process/activity oriented management. The proposed approach is inter-disciplinary (technology and management) and relies on combining the mobile object (agent) technology with the ABC/ABM model (Activity Based Costing/Management) toward what could be called "real time management". The core idea concerns the deployment of active business objects, which may form an essential part of the emerging global business infrastructure. In the ABO approach we use agents to encapsulate business objects which become mobile active software entities, holding all the necessary data (i.e., the business objects) and code (i.e., the behaviour) to take action according to the different situations that can occur. Introduction Currently, business information and data are communicated across seamless and interweaven information supply chains among the involved parties within business-to-business or business-toconsumer operations and relations.Furthermore, the management tools used do not focus on the raw organizational material, which are the business processes composed of activities involving resources be they human, structural or material.Business processes analysis and quantification is crucially lacking in most organizations.As a result, performance is often measured by using legacy tools and techniques, which are outdated and do not reflect the core business activity.These tools often fail in providing the necessary indicators that should ultimately provide the managers with "real time" information allowing sound and accurate management. This paper aims at studying a new paradigm for building and using Business Information Systems, accompanied by the required enabling technologies and business models, which will assimilate innovative marketplace concepts and thus allow for better process / activity oriented management.The core idea concerns the deployment of Active Business Objects, which may form an essential part of the emerging Global Business Infrastructure.The proposed approach is inter-disciplinary (technology and management) and relies on combining the mobile object (agent) technology with the ABC/ABM model (Activity Based Costing / Management) towards what could be called "real time management". The mobile object or agent technology provides a new programming paradigm for network oriented applications allowing to move computation in a flexible and cost effective way towards the source of data.In the scope of the ABC/ABM model agents can be used to capture essential notions such as business processes and activities.By encapsulating business objects and their corresponding behavior into Agents, the resulting Active Business Objects become part of processes that traverse organizations, travelling across the various activities required to successfully fulfill their tasks.Furthermore, they become information providers able to release relevant information about their own usage, execution and state to the information system and henceforth to management instrument panels. 
The ABC/ABM methodology provides a conceptual framework to measure the resource consumption in the production of a service or product and consequently allow for efficient organization and strategic management through management instrument panels in a airplane cockpit metaphor.It is based on the analysis of the tasks, activities and processes that are involved in the production, freed from the apparent organizational structures.In this context it becomes possible to trace the real consumption of resources for the production of a product, based on the axiom that products/services make use of activities which in turn consume resources.This by opposition to classical costing approaches that have proven to be inefficient as they are based on arbitrary allocation standards which do not reflect the real resource consumption. Section 2 provides an introduction to Activity Based Costing and Management (ABC/ ABM) and how it relates to organizational controlling and management before describing how such a methodology can be used at an operational level.In section 3 we present the mobile object (agent) paradigm as an enabling technology for ABOs.Section 4 describes the Active Business Object framework resulting from the combination of the ABC/ABM model and the mobile object (agent) paradigm.Finally section 5 concludes with open issues and future work. ABC / ABM as a Methodological Framework The necessity of controlling organizations appeared a long time ago and lead to the development of sciences such as management.The instrument which most rapidly and logically emerged was cost calculation.Although the real development of costing occurred during the 19th century (Ecole suppérieure de commerce de Paris, 1866) the concept originates by far before according to Johnson and Kaplan [1]. The development of industry, the scientific organization of work (20th century with Taylor) and finally new forms of organization reveal the need for a new type of management.Being able to define a cost in a precise way (arithmetic) is not enough any more.Now it is necessary to guide and orient the behavior of the persons who are in charge of financial results (i.e., managers, decision makers, etc).The General goal of management controlling is to verify the adequation between resource consumption and the corresponding output obtained in return.The concept of management instrument panel or control panel as well as a broader dimension of management control is emerging with a particular consequence: the criticism of too simplistic costing approaches and the need for strategy oriented tools.In this context, the eighties saw the development of Activity Based Costing and Activity Based Management (ABC/ABM). Activity Based Costing and Management (ABC/ABM) is an interesting methodology in that it allows, at least in a partial way, to escape from the strict accounting and financial vision of management.In particular, ABC addresses the issue of measuring whereas ABM is concerned by the means of action on the relation between resource consumption and result through management instrument panels or control panels. 
Origins and Concept The ABC/ABM method initially appeared in the United States towards the end of the eighties consequently to work done by a group of the Consortium for Advanced Manufacturing -International (CAM-I) [2].The three following elements explain the context and the need for this method: • increase of indirect costs in most sectors, both in relative and absolute value • change in the nature of indirect costs.Indirect costs are being increasingly composed of costs influenced by product complexity, diversity and quality to the detriment of variables linked to the production volume. • Evolution of direct labor.Like the evolution of indirect costs, the proportion of direct labor cost in the total cost is constantly being reduced.Such an observation highlights the inadequacy of the use of this work unit to allocate the indirect cost. The method is not limited to providing more relevant and accurate cost information than traditional approaches.In this new approach, cost calculation is still regarded as a significant management instrument, but does not constitute an end in itself.One must first go through a phase of thorough analysis of the activities of the company [3].The objective is not any more to influence the level of costs (as in the cost center method through successive allocations for example), but rather to allow effective action on the activities which cause costs."People cannot manage costs, they can only manage activities which cause costs" [4]. A new dimension is thus given to ABC: ABM.A simple calculative aspect does not constitute the principal element of the method but it is augmented by adding to it a strategic and managerial dimension [5].By acting on the activities, it becomes possible direct actions towards long-term objectives.All the costs having the characteristic of being variable, they become usable in the decision making process, The method thus highlights the impacts of the various strategic choice parameters on the level of the activities.This is the origin of the Activity Based Management terminology or ABM. These statements call for a definition or model of the resource circulation process through the various activities and products of the company in the closest possible way to reality.It thus becomes possible to measure resource consumption of each activity and to value them.Likewise, it becomes possible to measure resource consumption of each process.All these information make it possible to carry out a reflection on both value added and non value added activities.Porter's value chain theory [6] can then be largely applied.The business policy thus becomes oriented towards an optimization of the difference between the created value and the costs rather than being limited to a simple minimization of the costs as it is largely done in traditional approaches. The basic principle of the method can be described the following way: products / services consume activities which consume resources.To be noted that products and services can be as-similated to processes, compound or not.The relationship between these three conceptual building blocks can be expressed with two relations: consumes and is necessary for.Processes consume sub-processes or activities which in turn consume resources.Likewise, resources are necessary to fulfill activities which in turn are necessary to fulfill processes.These relationship are shown in Figure 1. 
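The producer-consumer chain stated above (products/services consume activities, which in turn consume resources) can be expressed directly in code. The sketch below is a toy numerical illustration of that roll-up; the driver quantities, unit costs and names are invented for the example and do not come from the text.

```python
# Toy illustration of the ABC principle: products consume activities,
# activities consume resources. All figures below are invented.

resource_cost = {"labour_hour": 40.0, "machine_hour": 25.0}   # cost per resource unit

# resource consumption of each activity (resource driver quantities)
activity_resources = {
    "assemble": {"labour_hour": 2.0, "machine_hour": 0.5},
    "inspect":  {"labour_hour": 0.5},
}

# activity consumption of each product (activity driver quantities)
product_activities = {
    "widget": {"assemble": 1.0, "inspect": 2.0},
}

def activity_cost(activity):
    return sum(qty * resource_cost[r] for r, qty in activity_resources[activity].items())

def product_cost(product):
    return sum(qty * activity_cost(a) for a, qty in product_activities[product].items())

print(product_cost("widget"))   # 1*(2*40 + 0.5*25) + 2*(0.5*40) = 132.5
```

The point of the example is that every cost is traced through an explicit consumption relation rather than allocated through an arbitrary standard, which is exactly the departure from classical costing described above.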
If it is possible to measure each stage of this relation, then the cost price is perfectly defined.It requires to collect / capture the relevant information on consumption of resources and activities in the considered system.An ABC/ABM analysis thus requires a detailed modeling of the processes of the analyzed company, business, organization.It must describe as perfectly as possible the processes, activities and resources involved in the analyzed system.This approach is orthogonal to the organization and does not take into account aspects of the organizational structure. Road Map to Putting to Work an ABC/ABM Approach The ABC/ABM model is a methodological framework that needs to be interpreted in order to be used.The approach described below is drawn from an operational experience of using ABC/ ABM based on real cases and situations.It can be decomposed into four major steps graphically shown in Figure 2.This approach will be called the Target in the rest of this work. First Step: Setting Up the Referential Practical experiences show and emphasize some difficulties in constituting such a referential.One of them being to define an appropriate modeling depth.It is often bypassed through simpli- Analysis Calculation Capture Figure 2 The target: four steps of an ABC/ABM approach fication.Other approaches are even more questionable as they are based on general models of analysis where processes are defined a priori at a macro level. Another difficulty relates to the validity and the perenniality of the obtained model.It is important to distinguish the dynamic process which really consumes activities and evolves over time from a static procedure describing a "must be" seldom subject to fast evolution.Thus, one must be careful to distinguish the process aspect from the procedure aspect.the operational and functional reality will allow to bring out the effective management variables acknowledged by their actors.The modeling technique that we retain relies on a bottom-up approach decomposed the following way: 1.A detailed model (activities and resources) A global process model or business process model The resulting referential can be considered as a dictionary holding all the relevant information about the business in terms of processes, activities and resources which must be regularly updated and maintained. Second Step: ABM Analysis The objective of this stage is to identify the performance indicators to be set up.These performance indicators are to be understood as the set of measures allowing to evaluate the production system at the operational level both in terms of quality and quantity.Such indicators must satisfy the following two aspects (i) determine the relationship between resource consumption and output, (ii) determine the means to act on this relationship. These performance indicators are used to build the management instrument panels or control panels.They must be defined for any relevant responsibility level without any aggregation of information.It must always be possible to recover elementary information from any starting point.There are several types of management instrument panels or control panels: • Operational: providing information on process operation.They are primarily measurements of activities and resource consumption • Managerial or strategic: showing the state of the system with respect to valued objectives. 
Each management instrument panel or control panel is composed of the relevant indicators with respect to the needs of its user (i.e., the person in charge) allowing to take action through it.In other words one could say: a panel for everyone.Such tools are temporary by definition and evolve according to the referential evolution. Third Step: ABC Calculation ABC calculation consists in determining the financial value of the real resource consumptions by the activity drivers (i.e., that induce activities) and resource drivers (i.e., that induce resource consumption). It is important to note that the philosophy of the method is to turn all costs into direct costs with respect to an activity thus eliminating the cost allocation problem.In practice, the distance between this principle and reality mainly depends on two factors being: the quality of the referential and the available information.By definition, the calculation model evolves according to he evolution of the referential.Cost calculation is thus a consequence rather than an objective of the approach. Fourth Step: Data Capture Basically, all the meaningful information of all the relations defined in the referential should be captured.This implies collecting in a database all the events relating to processes, activities and resources in their raw elementary state (i.e., non aggregated information).Such information should be preserved as much as possible: • as raw information in order to allow "zooms" up to the finest desired level of detail from the management instrument panels or control panels • with historical information covering a broad enough time period to be able to identify tendencies and trends. In practice, the available information come from two sources: the accounting system and the production management system.But these information sources only seldom provide satisfactory information because they rely on organizational structures (e.g., an accounting plan does not reflect processes).Other information sources such as human resource timing systems are equally unsatisfactory as they introduce a bias due to their strong control aspects provoking behaviors masking reality. Software tools that alleviate these difficulties exist for example HyperABC and its evolution Metify of the company Armstrong Laing [7] is probably the most complete.It relies on the following modules: • A module allowing to define a process and activity based structure • An analysis and reporting tool (Cristal Reports, hyper cubes, etc.) • An ABC compliant calculation engine • A powerful data acquisition module able to import data from most information systems The third point is not commonplace since a process can in practice operate as well for a client as for itself (e.g., computer support).In the best case, it is possible to import data several times a year to calculate costs.In the worst cases, which are rather common, it is necessary to redevelop all or part of the information system taking into account this time the defined referential.The difficulty thus arises from the inadequacy between the referential and the information system.It is the main hurdle to the development of dynamic, sound and relevant management instruments. Ideally, each and every elementary event should be accessible as soon as it occurs, in a permanent way (i.e., persistent in time) and identifiable in the referential.These three conditions are essential to achieve time continuous management (i.e., dynamic measuring and management tools). 
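As a concrete illustration of the capture step, the sketch below records elementary consumption events in their raw, non-aggregated form and derives an indicator from them on demand, so that any aggregate shown on a control panel can still be "zoomed" back to the underlying events. The event fields are an assumption consistent with the referential described above (process, activity, resource, quantity, timestamp); this is a sketch, not an implementation of any of the cited tools.

```python
# Sketch of raw event capture: every consumption event is stored unaggregated,
# so panel indicators can always be drilled back down to elementary events.
# Field and function names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsumptionEvent:
    process: str
    activity: str
    resource: str
    quantity: float
    timestamp: datetime

events = []                                   # in practice, a database table

def record(process, activity, resource, quantity):
    events.append(ConsumptionEvent(process, activity, resource, quantity, datetime.now()))

def indicator(activity, since):
    """Total resource quantity consumed by an activity since a given date."""
    return sum(e.quantity for e in events if e.activity == activity and e.timestamp >= since)

def drill_down(activity, since):
    """Return the elementary events behind the indicator (the 'zoom')."""
    return [e for e in events if e.activity == activity and e.timestamp >= since]
```

Keeping the events raw and time-stamped is what makes both the "zoom" and the historical trend analysis described above possible.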
Mobile Objects (Agents) as a Technology While the client server model has received significant attention in recent years, a whole new domain of research has emerged from the combination of progress achieved both in object oriented research and networks.Mobile objects, mobile computations, mobile agents or simply agents [8] [9] [10] have reached the level of being a research area raising a set of issues dealing with security, distributed systems, networks, etc. There are basically two agent communities.First, intelligent agents or multi-agents and second, mobile agents.The former, rises from artificial intelligence and is focused on knowledge representation, collaboration, behavior, avatars, etc.The later stemming from object oriented, network and distributed system research address the issue of moving behavior towards the source of data.An agent in this context can be considered as an object (compound or not) in the object oriented terminology having the following two characteristics: persistency and network awareness.Persistency in the sense the object holds state in a persistent way.Network aware in the sense the object has knowledge of network and is able to migrate between participating nodes of the network.This represents a major change for network applications thus leading to a new paradigm.Moreover, it represents a significant opportunity in the field of electronic commerce since agents can be considered as natural metaphors of commercial actors be they consumers, providers, intermediaries (i.e., brokers, facilitators) or even business objects themselves thus becoming active. From these research issues and driven by market demand, a number of prototype systems and languages supporting this agent paradigm have been implemented (Emerald [11], General Magic's initial implementation: Telescript and current Java based technology: Odyssey [12], Obliq [13], D'Agents [14] formerly known as Agent TCL, IBM Aglets [15] [16], Object Space Voyager [17], Mole [18] [19], etc.) Some of which have already become commercial products which are now available on the market.However such systems represent a considerable challenge with respect to security issues.These have been identified and discussed thoroughly in [20] [21] and a prototype agent system called JavaSeal [22] [23] was implemented within the MEDIA project (Mobile Electronic Documents with Interacting Agents) [24]. The Active Business Object (ABO) Framework Having discussed the groundings and an enabling technology, it is now necessary to describe the ABO framework.ABO (Active Business Objects) identifies both a conceptual and an operational framework based on the ABC/M Target, previously shown in Figure 2 and the mobile object (agent) paradigm.Our starting point unfolds from this Target, and more specifically at the Referential level.This referential level can be assimilated to the notion of a business dictionary holding everything that is necessary to describes the business at a conceptual level (i.e., in terms of processes, activities and resources). 
The ABC/M model relies on a producer-consumer relationship between these three concepts that are necessary and sufficient to describe and thus model a Business: processes, activities and resources.We provide below a definition for each: • Process: a process is a notion freed from any organizational or structural dimension.It captures a functional dimension, transverse to the organization, oriented towards a production objective.In many cases although it is not a rule, processes, compound or not, can be interpreted as products.The notion of process is not primary or final in the sense that it can be refined within the model with sub-processes.Process interpretation and perception is thus dependent of the level of abstraction of its observer.Consequently, a process describes a functional objective of the business, composed of activities and/or sub-processes together with the necessary resources involved in their fulfillment. • Activity: an activity identifies an action which can be characterized by a verb and an object upon which the action applies.Activities are necessary in (i.e., consumed) fulfilling processes.Moreover, activities consume resources necessary to their fulfillment.The notion of activity can be considered primary or final in the sense it can not be refined within the model even if activities could be further decomposed into tasks.However this level of description is not needed as tasks can be considered attributes of an activity. • Resource: a resource identifies in a very broad sense any production factor whose consumption occurs within activities (i.e., which are necessary for fulfilling activities).The notion of resource can be considered primary or final in the sense it can not be refined within the model. Such businesses can be both real and/or virtual thus spanning a network of interconnected businesses unified by a common goal (commonly called a virtual enterprise).No matter what abstraction level (depth) is considered there will always be a corresponding Target describing it in terms of referential, management instruments (ABM), costing tools (ABC) and the corresponding relevant data and their acquisition mode.This is graphically shown in Figure 3. The ABO Model We have decomposed the ABOs in two distinct categories: Referential ABOs and Operational ABOs.The first category serving the purpose of capturing the business in terms of a model.The second category, the operational ABOs to capture the actual life (instances) of the business, through business objects.This keeping in mind that a central issue in the ABO framework is to be able at all time to measure everything and hence report activity through management instrument panels and allow action on the business to be taken both at the referential level and at the operational level.Although both categories will be modeled as agents, only the operational ABO agents will have migration capabilities.Referential ABO agents do not need mobility as they are bound to a given abstraction describing a business reason for which we consider them as static agents.The ABO model is shown graphically in Figure 4. 
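The three concepts defined above, together with the consumes / is-necessary-for relation, translate naturally into a small referential model. The sketch below is one possible encoding under the stated definitions (processes may consume sub-processes and activities, activities consume resources, resources are final); the class and attribute names are ours, not the paper's.

```python
# One possible encoding of the referential: processes consume activities or
# sub-processes, activities consume resources, resources are primary/final.
# Names and structure are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Resource:                  # primary/final: cannot be refined within the model
    name: str
    unit_cost: float

@dataclass
class Activity:                  # a verb applied to an object; consumes resources
    name: str
    consumes: list = field(default_factory=list)   # [(Resource, quantity), ...]

    def cost(self):
        return sum(qty * res.unit_cost for res, qty in self.consumes)

@dataclass
class Process:                   # functional view, transverse to the organisation
    name: str
    consumes: list = field(default_factory=list)   # [(Activity or Process, quantity), ...]

    def cost(self):
        return sum(qty * item.cost() for item, qty in self.consumes)
```

The recursive cost() on Process reflects the fact that a process may be refined into sub-processes, whereas activities and resources are terminal, as stated in the definitions above.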
The Referential ABO category can be further refined into three sub-categories following the ABC/ABM model.Namely: process, activity and resource Referential ABOs.This will allow to capture the model of the business in the referential upon which the Operational ABOs will be based.Furthermore having such referential objects will also allow to measure them as abstractions of the business.For example an order process referential ABO would not only allow to measure, trace and monitor the order operational ABO instances bound to it, but also the process itself and/or its activities and resources as abstractions of the business referential. The Operational ABO type can be seen from the framework stand point as a generic Business Object wrapper which is active (i.e.holds code), mobile (i.e., able to move around the network) and persistent (i.e., able to keep state between executions).It actually represents both a container and a vehicle for any instance of a business object.Since business objects are encapsulated within agents which are active software entities that the recipient(s) must execute (in order to get access to them), the provider's infrastructure (i.e. the particular Business Information System) can include code not only defining the structure of the business information and how it should be displayed, but also how to protect itself and interact with the recipient(s) asking, for example, authorization passwords, verifying the integrity of its contents, decrypting sensitive parts of the information, allowing the recipient(s) to interrogate it and obtain basic information about its contents, and even send messages through the network to any other involved party (e.g.bank(s), transport and insurance companies, subcontractors, e.t.c.).Furthermore ABOs are likely to interact with each other and trigger actions on their own in given situations (e.g., for a certain task two ABOs need to reach a certain state and trigger a specific action within a specified time frame; if one of the ABOs did not reach the required state within the specified time frame, a recovery action can be triggered by it or its peer ABO).Such Operational ABOs will be traveling within the business among the different activities composing the process to which it is attached as well as among various businesses and consumers thus enabling virtual business processes and interactions.This requires that the Operational ABOs be able to secure their content in clusters accessible only to their legitimate users. From there on, it becomes possible to measure and monitor these agents in a way similar to placing probes within them.As a result, managers and decision makers can be given tools allowing them to build management instrument panels showing in real time the indicators for which they have expressed interest.These indicators can reflect information from either the referential level (i.e., Referential ABOs) or from the operational level (i.e., Operational ABOs).Furthermore, it also becomes possible to browse and navigate through the ABOs to audit the business trying to identify poor business patterns and malfunctions and thus take corresponding actions.Finally, goal oriented simulations and scenario evaluation become possible. 
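A minimal sketch of an Operational ABO along the lines described above: a wrapper that carries a business object, keeps a reference to the Referential ABO it is bound to, records which activity it is currently traversing, and publishes probe measurements that a management instrument panel could subscribe to. The method names and the simple callback mechanism are illustrative assumptions; mobility, persistence and security are not shown here.

```python
# Minimal sketch of an Operational ABO: an active wrapper around a business
# object that reports its own usage to subscribed instrument panels.
# The probe/callback mechanism and method names are illustrative assumptions.
import time

class OperationalABO:
    def __init__(self, business_object, process_ref):
        self.business_object = business_object   # the encapsulated business object (e.g. an order)
        self.process_ref = process_ref           # link to the Referential ABO it is bound to
        self.trace = []                          # raw, non-aggregated history of traversed activities
        self.subscribers = []                    # instrument panels interested in this ABO

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def enter_activity(self, activity, resources=None):
        event = {"activity": activity, "resources": resources or {}, "time": time.time()}
        self.trace.append(event)                 # probe: the measurement stays at the source
        for notify in self.subscribers:          # publish to management instrument panels
            notify(self, event)

panel = lambda abo, event: print(f"[panel] {abo.business_object} -> {event['activity']}")
order = OperationalABO({"order_id": 4711}, process_ref="order-process")
order.subscribe(panel)
order.enter_activity("credit-check", {"labour_hour": 0.1})
```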
The ABO Architecture Based on previous experiences gained in agent based systems, we have designed and implemented a prototype framework (Hep) for the commercial distribution and exchange of electronic documents over open networks such as the Internet [25].This relates to what is also known as secure content encapsulation and superdistribution.Although the issues are very different, it appears that from an infrastructure stand point they share many similarities.Thus building upon this experience and the existing Hep framework towards our goal seams to be a promising direction where Active Business Objects could be considered as another class of electronic publishing application centered around notions of Active Business Objects and management tools.Such a layer must provide a clear interface (API) to the application layer thus allowing to build ABO based applications and interfaces to existing applications.The Hep framework has been successfully used in the implementation of HyperNews [26] [27] [28] using the JavaSeal agent execution platform. The publishing paradigm fits well in this context.Literally, publishing means to make known, to announce, to release.In the same vein, one can consider that an business object instance (i.e., an Operational ABO) moving around the network is published in a restricted way (i.e., usually on a one to one basis) at each step of the business process to which it belongs.Furthermore, changes in objects that are measured by indicators in management instrument panels are also published to those having registered for notification. ABO Based Tools Having described our approach through a model for Active Business Objects and an agent based architecture for their secure distribution and exchange, we now consider the anticipated tools for operating Active Business Objects.Among the most important we have identified the following: The first tool that is needed to operate Active Business Objects is a business modeling tool.It will allow to define the business referential in terms of processes activities and resources based on the bottom-up modeling approach described previously.Its role is to capture and instantiate the business specific referential. Given a referential, a tool is needed to browse and navigate through it.Such an ABO browser or navigator can be implemented easily using any standard Web browser.In a similar way, the ABO browser can also be used to browse Operational ABOs. In order to be able to do measurements and thus build the management instrument panels, a toolbox is needed to allow managers and decision makers to extract the needed indicators and information from both the referential and the operational ABOs in a way similar to placing a probe within them.Finally cost information should be able to be witnessed in a way reflecting real resource consumption as advised by the ABC/ABM model. Conclusion The history of management is probably at a turning point at least as far as information systems are concerned.When accounting was done by hand, one could reasonably defend that one or two analysis a year were enough (profit and loss, balance sheet).Nowadays, observing information a-posteriori after a long time period is not reasonable anymore.Current systems are too often traditional accounting systems that have been mechanized with more or less efficiency.Moving management analysis from a yearly basis to a monthly basis certainly is a progress, but capturing information directly at the source of consumption would be far more efficient. 
In fact, the world in which we live should be considered in its complex and continuous aspects rather than as a sequence of states whose transitions remain unexplained. The systems to be controlled are alive and evolve at a fast pace. It is thus advisable to analyze them as such, and to understand the resource consumption functions that underlie them. To progress in this direction, the development of continuous, real-time management and measuring instruments is unavoidable, particularly given the technological advances achieved in recent years. In this paper we have proposed an inter-disciplinary approach (management and technology) towards real-time management and a global business object exchange infrastructure. It stems from the combination of a technology known as mobile objects or agents and a management framework, ABC/ABM, used as a methodology. Many issues remain open and will need further investigation, as we have only sketched a vision based on solid experience in both fields. However, we are confident that the proposed reasoning will be useful in progressing towards innovative cross-fertilizations between management and information systems, thereby laying the groundwork for new heuristics. In the short term we anticipate building a proof-of-concept prototype in order to evaluate and assess the Active Business Object approach on a representative set of application scenarios drawn from real industrial cases.
Figure 1: Basic principle of the ABC/ABM model.
Figure 3: Different abstraction layers of an ABC/ABM approach.
Figure 4: The ABO Model.
6,550.6
2000-01-04T00:00:00.000
[ "Computer Science", "Business" ]
Synthesizing Complex-Valued Multicoil MRI Data from Magnitude-Only Images Despite the proliferation of deep learning techniques for accelerated MRI acquisition and enhanced image reconstruction, the construction of large and diverse MRI datasets continues to pose a barrier to effective clinical translation of these technologies. One major challenge is in collecting the MRI raw data (required for image reconstruction) from clinical scanning, as only magnitude images are typically saved and used for clinical assessment and diagnosis. The image phase and multi-channel RF coil information are not retained when magnitude-only images are saved in clinical imaging archives. Additionally, preprocessing used for data in clinical imaging can lead to biased results. While several groups have begun concerted efforts to collect large amounts of MRI raw data, current databases are limited in the diversity of anatomy, pathology, annotations, and acquisition types they contain. To address this, we present a method for synthesizing realistic MR data from magnitude-only data, allowing for the use of diverse data from clinical imaging archives in advanced MRI reconstruction development. Our method uses a conditional GAN-based framework to generate synthetic phase images from input magnitude images. We then applied ESPIRiT to derive RF coil sensitivity maps from fully sampled real data to generate multi-coil data. The synthetic data generation method was evaluated by comparing image reconstruction results from training Variational Networks either with real data or synthetic data. We demonstrate that the Variational Network trained on synthetic MRI data from our method, consisting of GAN-derived synthetic phase and multi-coil information, outperformed Variational Networks trained on data with synthetic phase generated using current state-of-the-art methods. Additionally, we demonstrate that the Variational Networks trained with synthetic k-space data from our method perform comparably to image reconstruction networks trained on undersampled real k-space data. Introduction Deep learning-based MRI reconstruction methods show promise in faithfully reconstructing MR images from undersampled k-space measurements, but such methods are usually hampered by a lack of paired and diverse training data, posing a barrier to effective clinical translation of these technologies. Current deep learning MRI reconstruction techniques use datasets [1][2][3][4][5][6][7][8][9][10][11][12][13] containing paired images and raw k-space MRI data and have enabled major advances in MRI reconstruction methods. However, they are limited in several ways. Magnitude images contained in these datasets are sometimes preprocessed which can lead to biased results for MRI reconstruction [14] and are hard to standardize. Furthermore, these publicly available datasets are typically limited in anatomy, acquisition parameters and pathology information. Recent studies have shown that such limitations can sometimes result in hallucinations of structures or artifacts during deep learning-based MRI reconstruction [15,16], limiting the generalization potential of these methods and their clinical use. There could be significant advantages to leveraging the diversity of existing clinical MRI databases as they contain a range of patient populations, anatomy, pathology, image contrasts, acquisition parameters, and data from different vendors. 
This would be particularly useful for multi-task networks, e.g., [17], that perform both image reconstruction and a downstream task such as segmentation or classification. Training on more diverse and representative datasets can also greatly contribute to improving deep learning reconstruction models, especially for rare anatomies and pathologies; this could potentially allow for greater clinical adoption. However, we cannot simply use clinical datasets for MRI reconstruction algorithm development because they typically only contain magnitude images while image phase information is discarded. Furthermore, MRI data are acquired from multi-channel RF coils, but clinical images show a coil-combined image and, thus, the multi-channel information is lost. MRI phase data are important because they contain information related to contrast from chemical shift, magnetic susceptibility differences, inhomogeneities in the main magnetic field, RF coils used, fat/water separation, tissue interfaces, blood flow, and temperature change [18][19][20][21][22][23]. Additionally, recent studies have shown that using complexvalued neural networks which operate on data that include phase information produce higher quality reconstructed images [24,25]. Thus, the ability to recover or generate image phase from already completed scans could increase the utility and applicability of deep learning MRI reconstruction methods. While a variety of techniques aim to synthesize different MRI contrasts [26][27][28][29][30][31] or parameter maps [32,33], relatively few techniques exist to synthesize MR image phase and complexvalued multi-coil data. Recent studies have included methods to generate synthetic image phase by emulating very specific physical models [34], generating sinusoidal phase [35], or have focused on fine tuning training datasets consisting mostly of natural images [36]. To the best of our knowledge, no methods have attempted to broadly synthesize realistic MRI phase maps. To address this, we present a method for synthesizing realistic MRI data, including image phase and multi-channel information, from magnitude-only images that, for example, are found in clinical imaging archives. Our method leverages recent advances in deep generative modeling [37,38] to generate synthetic MRI phase images from input MRI magnitude images. Corresponding coil sensitivity maps are derived and then used to generate synthetic multi-channel data. The resulting synthetic multi-coil MRI data, including synthesized image phase, were then evaluated for their ability to be used in image reconstruction tasks by training a Variational Network [39] and comparing to a network trained on real multi-coil MRI data. Our results show that the proposed method (i) generates realistic looking MR phase maps, (ii) outperforms current methods used to generate synthetic phase data for training reconstruction models and (iii) image reconstruction networks trained on synthetic multi-coil data perform comparably to the same networks trained on real data. Our findings suggest that this framework has the potential to address the limitations that exist in current MRI datasets used for reconstruction tasks where access to raw k-space data is required. Methods We first start by defining k-space, magnitude, and phase in mathematical terms. We then describe generating synthetic phase images from input magnitude-only images using a conditional generative adversarial network (GAN) framework. Finally, we describe the evaluation of the synthetic data quality. 
Preliminaries The signal acquired from a 2D slice (assuming we can neglect T 2 decay) in the spatial frequency domain, or k-space, can be expressed as: where m(x, y) is the signal generated at the position (x, y). This is a complex quantity which is equivalent to m(x, y) = m x (x, y) + jm y (x, y), where m x (x, y) is the real component of the signal and m y (x, y) is the imaginary component of the signal. The goal of MRI reconstruction is to recover m(x, y) from M(k x , k y ). The MR signal, m(x, y), is generated by the rotation of the transverse components of the net magnetization. The signal is complex-valued because it is a measurement of both the x and y components of the net magnetization. The majority of MRI scans are interpreted based on the magnitude of the signal, |m(x, y)|, which corresponds to the amplitude of the transverse magnetization. There is also information encoded in the phase (also known as angle) of the signal, ∠m(x, y), which corresponds to the rotation angle of the transverse magnetization. This includes chemical shift, magnetic susceptibility differences, inhomogeneities in the main magnetic field, RF coil profiles, fat/water separation, tissue interfaces, and blood flow. Neural Network Architecture The generator is a 16-layer U-Net [40] with skip connections and the discriminator is a 70 × 70 PatchGAN [38]. In this setup, the discriminator, in a convolutional manner, decides if a patch is real or fake. We used a PatchGAN discriminator to restrict the network's attention to the structure of local image patches. This encourages the discriminator to penalize the structure at the scale of patches rather than the whole image (as in a typical binary classifier) in order to effectively capture high-frequencies in the synthetic image. In a sense, PatchGAN acts as a classifier itself. The main difference is that the output of the PatchGAN is an N × N array in which each element signifies whether the corresponding patch in the image is real or fake. We chose a patch size of 70 × 70 based on the results of previous studies [38], which empirically found that this patch size gives the best tradeoff between image sharpness and alleviating artifacts in the generated image. Each generative model was trained with a batch size of 1. We used minibatch stochastic gradient descent (SGD) with the Adam optimizer [41] using a learning rate of 2 × 10 −4 and momentum parameters β 1 = 0.5 and β 2 = 0.999. Additional implementation details can be found in [38]. Synthetic Phase Generation The conditional GAN uses a hybrid objective consisting of two loss functions: a conditional adversarial loss function and a regularized 1 distance loss function. In essence, we trainined the model to generate high-frequency structures in the synthetic image, and we used the 1 loss to control how many low-frequency structures were present in the image. where x is the input magnitude image, y is the generated synthetic phase image that corresponds to x, and z is the latent vector. In this objective, G tries to minimize the objective while D tries to maximize it. This setup is suitable for our aim because the discriminator is conditioned on the input image x, and we have access to the raw groundtruth data, and thus also to the ground-truth phase data. The network was optimized by alternating between gradient descent steps conducted for optimizing the discriminator and the generator, similar to the approach as described in the original GAN paper [37]. 
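Since the objective above combines a conditional adversarial term with a regularized L1 term, a compact way to see how the two interact is the training-step sketch below. It follows the pix2pix-style setup described in the text (U-Net generator, PatchGAN discriminator conditioned on the input magnitude image, alternating updates), but the weighting value, the tensor shapes and the variable names are assumptions for illustration, not the exact implementation used in the paper.

```python
# Sketch of one optimisation step for the hybrid objective: conditional
# adversarial loss plus lambda-weighted L1 between synthetic and ground-truth
# phase. Shapes, lambda and names are illustrative assumptions.
import torch
import torch.nn.functional as F

lam = 100.0                                   # weight of the L1 term (assumed)

def train_step(G, D, opt_G, opt_D, magnitude, phase_gt):
    # --- discriminator: real (magnitude, true phase) vs fake (magnitude, G(magnitude)) ---
    phase_fake = G(magnitude)
    d_real = D(torch.cat([magnitude, phase_gt], dim=1))
    d_fake = D(torch.cat([magnitude, phase_fake.detach()], dim=1))
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- generator: fool the PatchGAN and stay close to the ground-truth phase ---
    d_fake = D(torch.cat([magnitude, phase_fake], dim=1))
    loss_G = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + lam * F.l1_loss(phase_fake, phase_gt))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

The PatchGAN output here is an array of patch-wise logits rather than a single scalar, so the binary targets are broadcast over all patches, which is what restricts the adversarial penalty to local structure as described above.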
Specifically, we trained a U-Net to predict the phase component from input magnitudeonly images. During training, this synthetic phase component was compared to the ground truth phase using a hybrid objective (3). This mixed loss function balances realistic looking phase images via the adversarial loss and encourages less blurring via the 1-norm. Each GAN model was trained for 50 epochs with a qualitative analysis of realistic synthetic phase maps being the main stopping criteria. During inference, the trained U-Net was used to generate synthetic phase from previously unseen magnitude images, resulting in the creation of realistic synthetic MRI phase data. Multi-Coil Data Generation To generate synthetic multi-coil k-space data, we first analytically converted the input magnitude and generated synthetic phase images to real and imaginary components. Sensitivity maps were then generated using the ESPIRiT [42] algorithm on corresponding ground-truth raw data from the training dataset. The resulting sensitivity maps were multiplied with the real and imaginary components to generate multi-coil synthetic k-space data. This resulting complex-valued data can be used in place of ground truth k-space data to train deep learning based MRI reconstruction networks. Dataset Multi-coil k-space data obtained from the fastMRI [1] dataset were used for training the conditional GAN. The dataset consists of raw complex-valued k-space with both magnitude and phase information of brain scans at 1.5 T and 3 T. The images were acquired with a fast spin echo (FSE) pulse sequence with an echo train length (ETL) of 4. For training, we divided the dataset into two datasets with 16-coil and 20-coil acquisitions. Each dataset consisted of T 1 -weighted, T 2 -weighted, and FLAIR contrast images. Generative We trained two generative models: A 16-coil model and a 20-coil model, trained on 22,691 and 18,519 magnitude-only brain images, respectively. During training, the U-net generator generated a synthetic phase image and the discriminator compared the generated image to the corresponding ground truth phase image (obtained from fastMRI) in a convolutional patch-wise manner. At inference time, magnitude-only images from the fastMRI test set were run through a forward pass of the trained generative models. In our experiments, this enabled the generation of 6541 synthetic phase images for the 16-coil model and 5845 synthetic phase images for the 20-coil model. Evaluation: Physics-Based Image Reconstruction To evaluate the utility of complex-valued multi-coil k-space data synthesized from the generative model, we compared the quality of reconstructed MR images from reconstruction networks trained on ground-truth and synthetic data. Multiple equispaced undersampling masks of acceleration factors R = {4, 6, 8, 10} with a center fraction of 0.04 were applied to k-space data to be used for training. Two Variational Networks [39] were then trained for 10 epochs with the 16-coil and 20-coil datasets each. Each Variational Network was trained separately on synthetic and ground truth multi-coil k-space with a 80/10/10 training/validation/test split for a total of four trained reconstruction networks per acceleration factor. Each trained reconstruction model (ground-truth and synthetically trained) was then run on the same ground-truth test set. The quality of reconstructed magnitude images was evaluated using standard quantitative image reconstruction metrics: PSNR, NMSE, SSIM [43]. 
We decided to use the Variational Network for evaluation because of its reliance on undersampled multi-coil k-space and coil sensitivity maps as inputs into the unrolled network. Please see the Figure 1. All models (generative and reconstruction) were implemented in PyTorch and were trained on NVIDIA (Santa Clara, CA) Titan RTX and Quadro RTX 8000 GPUs. Figure 1. The proposed synthetic raw data generation and image reconstruction pipeline. The generative model takes magnitude images as an input seed and produces plausible synthetic phase images as output, which are trained to match ground truth phase images from the dataset. Synthetic complex-valued data is obtained by combining the input (ground truth) magnitude image and synthetic phase image to yield real and imaginary components. Estimated sensitivity maps calculated with ESPIRiT from the training dataset are then applied to synthetic complex-valued multi-coil data to compute multi-coil k-space encoded with synthetic phase information. The synthetic raw data were evaluated by training a Variational Network using undersampled k-space data. Figure 2 shows sample comparisons between synthetic and ground truth phase images. The synthetic phase images show several expected features, including low spatial-frequency components, a noisy background, and tissue phase contrast, for example, between the ventricles and adjacent brain tissue. We do not expect the synthetic phase to exactly match the ground truth phase because the MRI phase is not deterministic and can vary based on B 0 homogeneity and RF coil induced phase shifts. In some cases, blocking artifacts have appeared, possibly due to the PatchGAN discriminator. Figures 3 and 4 show representative images reconstructed with Variational Network models trained with undersampled ground truth and synthetic k-space data, correspondingly. For R = {4, 8} acceleration factors, the reconstructed images trained on synthetic data contain slightly more error structures compared to the images trained on ground-truth data. However, visually, there are no obvious artifacts in the reconstructed images in either method. For the R = 8 acceleration factor, we can see more errors in high resolution features, possibly due to the lack of high frequency details in the synthetic phase images used to train the reconstruction network. Figure 2. Representative ground truth magnitude, ground truth phase, and synthetic phase images generated from the conditional GAN. Synthetic phase images show expected features, including appropriate noise patterns, low spatial-frequency components and tissue contrast between the ventricles and nearby brain tissue, but exhibit some blocking artifacts possibly from the patchGAN discriminator. The reconstruction networks were trained on ground truth data, synthetic data (from our proposed method), sinusoidal phase data, data with random phase and data with zero phase. From the plots, Variational Networks trained on undersampled synthetic data perform comparably to the same networks trained on ground truth undersampled k-space at R = {4, 6} as measured by PSNR, NMSE and SSIM. At R = {8, 10} acceleration factors, the performance of networks trained on synthetic data dips, especially the SSIM curve, but remains relatively comparable to that of the networks trained on ground truth data. Additionally, the networks trained on synthetic data outperform networks trained on sinusoidal phase data in all quantitative metrics for the 20-coil dataset. 
For the 16-coil dataset, similar results were observed for the PSNR and NMSE measurements, while the performance in the SSIM metric was comparable to the sinusoidal phase trained network. Discussion There is a massive amount of magnitude-only images as this is what is typically stored in clinical imaging databases (e.g., PACS), which do not usually contain phase and multicoil information or raw k-space data. This work proposes a framework to generate synthetic multi-coil MRI data from magnitude-only MR images, and evaluates its utility by training a deep learning-based image reconstruction network using the synthesized datasets. The demonstrated framework aims to allow for the use of these large imaging databases for developing data-driven methods that require MRI raw data. We chose to evaluate using a Variational Network image reconstruction model as a proof of concept to demonstrate the effectiveness of the method. We believe a more significant opportunity for such a synthetic data pipeline is to train multi-task networks, e.g., networks that perform both image reconstruction and a downstream task such as image segmentation or classification [17]. In these methods, the synthetic data pipeline can take advantage of existing clinical images and annotations for the downstream tasks, enabling the creation of customized datasets for multi-task machine learning techniques. Other approaches that generate synthetic MRI training data typically build on natural image datasets. For example, in [34], the authors simulated signal voids in MR images by randomly applying masks to natural images to generate synthetic data. Additionally, in [35,36], the authors used a natural image dataset and a magnitude-only MRI dataset, respectively, and modulated the training images with a sinusoidal phase at a random frequency. They demonstrated that training with this synthetic data showed substantially higher levels of aliasing artifacts compared to using real MRI data. The proposed generative modeling approach shows more realistic image phase maps that include both the low-frequency features, which these prior methods aimed to incorporate, as well as contrast based on the underlying tissues and anatomy (Figure 2). Our quantitative results ( Figure 5) suggest that encoding this tissue phase information (not just low-frequency or sinusoidal phase information) into training data for deep learning models adds more useful information for the network to learn higher quality image reconstructions. The authors of [36] observe that deviations in SNR, acquisition type, and aliasing patterns between training and testing times can result in widely varying image reconstruction quality. With this in mind, future experiments can extend our work to exploit the synthetic data pipeline and large clinical imaging databases to generate custom heterogeneous datasets to train more robust and generalizable image reconstruction models. In addition to generating synthetic phase maps, a major aim of this work was to generate multi-coil data to increase the clinical relevancy of the technique. We take advantage of the well-established coil sensitivity map algorithm ESPIRiT [42] to estimate coil sensitivities instead of trying to learn them directly. This approach requires running ESPIRiT on prior ground truth data from fastMRI and, thus, a paired dataset with magnitude and ground truth phase information is still required for this part of the method for image generation. 
In previous experiments, we tried to generate multi-coil data by adding a two-channel real and imaginary component to the output of the conditional GAN. This would result in generated synthetic real and imaginary images for N coils from a single magnitudeonly image input. While this approach produced reasonable phase maps and comparable reconstructions for generative models trained on data acquired with a small number of coils (e.g., N = 4), the phase maps resulting from generative models trained on N = {16, 20} number of coils suffered from large amounts of structure hallucination and blocking artifacts. We hypothesize that, during training, gradients across multiple individual coil images are ill-behaved and, thus, GAN models trying to generate a large number of coil images have difficulty converging. The advantage of our proposed technique is that it is coil-agnostic; it can be applied to MR images acquired with any number of coils with the generative model learning a one-toone mapping from magnitude to phase. This results in more stable training and gradient flow, especially for GANs. It is important to note that we do not expect the synthetic phase maps to be necessarily consistent with the ground truth phase maps. This is because MR phase is not deterministic, and can vary due to tissue composition, scan parameters such as TE, magnetic field homogeneity, and the RF coil configuration and loading. This inconsistency would be problematic for performing any quantification on the synthetic maps themselves. However, consistency with ground truth phase for individual datasets is not required when the synthetic data are used for training, but rather the synthetic phase should be consistent with population-level phase patterns. Nevertheless, enforcing a physics-based consistency between the input magnitude image and output phase image by adding a regularized term in k-space to the training objective could be a useful followup experiment to this work. Such a change could result in even more representative phase maps. However, GAN stability during training with this new objective remains an open question and would have to be answered empirically. In lieu of this, a score-based generative model could be used for this technique due to their improved training stability compared to GANs [44]. A current limitation of this study is that only fast spin-echo (FSE) images from the fastMRI database were used to train the generative model. The exclusion of gradientrecalled echo (GRE) acquisition data in the training dataset makes the trained generative models and downstream reconstruction models susceptible to distribution shift errors. To address this limitation, future work could include fine-tuning the generative models trained on FSE data with GRE data. Additionally, quantifying the uncertainty in distributions not seen at inference time as proposed in [45] could provide insight into how the generative model is synthesizing phase images on a pixel-wise basis. Finally, the evaluation of generative models, especially for synthetic medical imaging data, is still an open research direction [46]. While this study used an unrolled image reconstruction network to evaluate the utility of the synthesized complex-valued multi-coil data, other methods, e.g., the Inception Score [47] or FID score [48], could be used to characterize the distribution of that data. 
Incorporating a customized implementation of these distance metrics based on medical imaging datasets [49] could be more fruitful in characterizing synthetic MR phase images. This information could also possibly be used to direct generative model training to synthesize datasets customized for specific downstream tasks. Conclusions This work presents a new method for synthesizing realistic, multi-coil MRI data from magnitude-only images that uses a GAN to generate image phase and ESPIRiT-generated coil sensitivity maps. The synthetic data were evaluated by comparing the reconstruction performance of Variational Networks trained on real k-space and synthetic k-space data. Our results suggest that the proposed method for generating synthetic data (i) outperforms the current state-of-the-art methods for creating synthetic image phase and (ii) is adequate for training deep learning MRI reconstruction models at typical acceleration factors (up to 10×), shown by the Variational Networks results. Taken together, our results suggest that image-to-image translation generative adversarial networks are able to generate MRI phase images that are both realistic-looking and can also provide a good performance when used for training an image reconstruction network. This allows for the possibility of using large, diverse clinical imaging databases that contain magnitude-only images when developing deep learning MRI reconstruction methods. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/bioengineering10030358/s1, Table S1: PSNR values for VarNet trained on different types of phase at various acceleration factors for the 16-coil dataset. Bold values indicate the best performing type of phase (not including ground truth); Table S2: PSNR values for VarNet trained on different types of phase at various acceleration factors for the 20-coil dataset. Bold values indicate the best performing type of phase (not including ground truth); Table S3: NMSE values for VarNet trained on different types of phase at various acceleration factors for the 16-coil dataset. Bold values indicate the best performing type of phase (not including ground truth); Table S4: NMSE values for VarNet trained on different types of phase at various acceleration factors for the 20-coil dataset. Bold values indicate the best performing type of phase (not including ground truth); Table S5: SSIM values for VarNet trained on different types of phase at various acceleration factors for the 16-coil dataset. Bold values indicate the best performing type of phase (not including ground truth); Table S6: SSIM values for VarNet trained on different types of phase at various acceleration factors for the 20-coil dataset. Bold values indicate the best performing type of phase (not including ground truth).
5,664.4
2023-03-01T00:00:00.000
[ "Engineering", "Medicine", "Computer Science" ]
Improved V-shaped interior permanent magnet rotor topology with inward-extended bridges for reduced torque ripple : Interior permanent magnet synchronous machines (IPMSMs) with V-shaped permanent magnet (PM) rotors are widely used as traction motors in electric vehicles because of their high torque density and high efficiency. However, the V-shape IPMSMs have the disadvantages of inevitable torque ripple due to the non-sinusoidal air-gap flux density distribution and the utilisation of the reluctance torque. In this study, with the aim of improving the torque ripple characteristics, a modified V- shaped IPMSM rotor configuration with bridges extended inwards towards the pole centre is proposed to generate a more sinusoidal air-gap flux density waveform. The proposed topology, referred to as ‘Type C’ within this study, is compared with baseline rotor configuration references, namely ‘Type A’ which is a conventional V-shaped PM rotor, as well as ‘Type B’ which is a related configuration with a mechanically non-uniform air gap. The analysis results show that the rotor ‘Type C’ exhibits significant advantages in terms of reducing cogging torque, torque ripple and radial force, without incurring additional air-gap friction losses. Finally, a prototype of the IPMSM with the proposed rotor configuration is manufactured and tested, verifying the predicted benefits experimentally. Introduction With the increasingly stringent emission requirements and with many countries setting dates eliminating sales of ICE-based vehicles, research and development on electric vehicles (EVs) is at an all-time high. The electrical machine is at the heart of EV architectures, hence the current focus is on increasing its power density or cost performance to targets set by national bodies such as the department of energy (DoE) in the USA or the advanced propulsion centre (APC) in the UK [1,2]. The interior permanent magnet synchronous machine (IPMSM) is an excellent candidate for EV propulsion, and in fact many automotive traction machines adopt this topology [3,4]. Differently from the surface-mounted permanent magnet (PM) synchronous motors, there is an asymmetry of the reluctance between the d-axis and the q-axis, in which saliency can be used to produce reluctance torque in addition to the magnet torque, thus improving the efficiency. In addition, the IPMSMs provide more rotor robustness as magnets are embedded in the rotor laminations; hence, no additional retention sleeving is required. Various IPMSM technologies have been studied in the literature. In [3], the V-shape, the U-shape, the spoke-type and the tangential-type IPMSMs are compared. The spoke-type motor has the widest constant power speed range (CPSR) and the lowest average output torque. On the contrary, the tangential-type motor has the highest average output torque and the worst CPSR. The Vshape motor has good average output torque and lower torque ripple. In [4], the saliencies of three IPMSMs with the flat-shape, V-shape and U-shape rotor topologies are investigated considering the magnetic cross-coupling effect. The V-shaped PM motor has the highest saliency ratio. In [5], the demagnetisation characteristics of PMs in four IPMSMs with the flat-shape, the Vshape, the flux concentrated and the flux concentrated-V rotor topologies considering the non-linearity of PMs are investigated. It is found that the PMs of flat-shape and V-shape rotors have better demagnetisation characteristics due to the reduced armature reaction influence on them. Kim et al. 
[6] also performed a study on the demagnetisation characteristics of PMs, and the results show that the PMs in the flat-shaped PM rotor are much easier to be demagnetised under the maximum torque operation with respect to the V-shaped magnets. In summary, considering the many important factors in play, the IPMSM with V-shape rotor seems to be one of the better compromises among the other IPM configurations, hence its more widespread adoption in commercial hybrid electric vehicle (HEV)s/EVs. Despite the aforementioned merits of the IPMSM with a Vshape rotor, issues still exist, setting up the scope for further research investigations. It has the disadvantages of inevitable torque ripple due to (i) the non-sinusoidal air-gap flux density and (ii) the utilisation of the reluctance torque [7]. As the torque ripple affects the vibration and noise (and hence drivability), it should be reduced [8]. There are various V-shaped PM rotor variables which influence the torque characteristics of IPMSMs, and several methods have been shown to improve the torque characteristics of the V-shape IPMSM. In [9,10], the skewed slot stator is adopted in order to reduce the cogging torque and the torque ripple while in [11][12][13], the step-skewed rotor is used to achieve similar effects. These are in fact widely adopted traditional methods, which lead to an increase of the manufacturing cost/time and the decrease of the average output torque. In order to reduce the torque ripple without detriment to the average output torque, an asymmetrical V-shape rotor configuration of an IPMSM is presented in [14]. This is proven to be an effective way to reduce the torque ripple albeit there exists a limitation to single rotation direction (i.e. unidirectional benefit). Based on the research work in [14], a novel asymmetrical V-shape rotor is proposed in [15]. The novel rotor structure exhibits satisfactory torque characteristics, without the limitation to rotation direction. Another kind of asymmetrical rotor by designing a non-uniform air gap is studied in [16,17], with the sinusoidal air-gap flux density waveform achieved by adjusting the non-uniform air-gap length. A similar rotor structure is also studied in [18], which proposes an asymmetrical rotor with non-uniform distribution of holes. In addition, the bridges and ribs of the Vshape rotor are studied in [19][20][21], where it is shown that these can be optimised to maximise the fundamental air-gap flux density and minimise its total harmonic distortion (THD). This work presents a modified V-shaped PM rotor topology which reduces the torque ripple by extending the bridges further IET Electr. Power Appl., 2020, Vol. 14 Iss. 12, pp. 2404-2411 © The Institution of Engineering and Technology 2020 inwards towards the pole centre. This paper is organised as follows. Section 2 describes the origins of torque ripple together with the prior-art concept of mechanical pole shaping as a means to reduce it. Based on the aforementioned, the proposed concept 'Type C' is introduced as an alternative way of achieving similar performance benefits with the key advantage of having a uniform physical air gap. The electromagnetic performances of the topology are investigated in Section 3, while Section 4 verifies the mechanical suitability. The prototype with the proposed rotor concept is built and experimentally validated in Section 5, and finally in Section 6 conclusions from this research are discussed. 
IPMSM for EV traction The electric motor is at the heart of the electric propulsion system in EV, and thus it has to satisfy multiple important requirements such as high output torque at low speeds, high power output at high speeds and high efficiencies within a wide speed range. Besides, the cogging torque and torque ripple should be reduced, because these can produce mechanical vibrations and acoustic noise, thus degrading the performance of IPMSM drives, especially at lowspeed operations. Mathematically, the cogging torque T cog of this kind of machine is described as part of the magnetic torque, and can be expressed as: where Φ is the magnet flux crossing the air gap, R is the total reluctance through the flux paths and θ is the rotor angle. It is clear that if the reluctance R does not vary as the rotor rotates, then the cogging torque T cog is zero. From this point of view, the cogging torque can be improved by changing the V-shape rotor structure variables which affect the reluctance R. In terms of torque ripple, a slight mismatch between the back electromotive force (EMF) of the machine and the current often produces torque ripple. As the windings of the IPMSM are excited with three-phase sinusoidal currents, it is crucial to obtain the sinusoidal back-EMF waveform as well as the sinusoidal air-gap flux density. Thus, it is important to consider the air-gap flux density B g , which can be expressed as where F g is the air-gap magnetomotive force, R g is the air-gap reluctance, A g is the cross-sectional area of the air gap, μ 0 is the permeability of vacuum and l g is the air-gap equivalent length. The range of θ varies from − 90° to 90° in the electrical angle, then (3) can be expressed as a periodic function: where B max is the maximum air-gap flux density. According to (4), in order to obtain a sinusoidal air-gap flux density distribution, B g is maximum at the d-axis position, when θ is equal to zero. Moreover, under the ideal condition, the air-gap flux density varies sinusoidally with the change of θ. Then the airgap equivalent length l g is a function of θ. From the foregoing discussion, based on the principle of generating sinusoidal air-gap flux density waveform and cogging torque, an effective method to reduce both the cogging torque and torque ripple is by using the V-shaped PM rotor structures which have variable air-gap equivalent length. Fig. 1a shows the conventional rotor structure, hereafter referred to as 'Type A'. Rotor 'Type B', shown in Fig. 1b, achieves a more sinusoidal air-gap flux density distribution by mechanically shaping the rotor surface, resulting in a non-uniform mechanical air gap, as in [16,18], while rotor 'Type C' achieves a similar effect, but rather than by mechanical pole shaping, this is done through the use of the elongated bridge as evident from Fig. 1c. The studied V-shaped PM IPMSM is used as a traction motor in EVs, and the specifications of the motor are listed in Table 1. Based on the specifications, the three rotor types are developed and analysed. The following conditions are satisfied in order to compare the merits and demerits of the three rotors in Fig. 1: (i) the stator core, winding and shaft are identical; (ii) the same amount and shape of PM are utilised; and (iii) the width of the bridge and rib, and α of the three IPMSMs are of the same value. The detailed values related to the magnetic bridge are listed in Table 2. 
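Written out explicitly, the relations referred to above take the following standard forms, using the symbols defined in the text; this is a sketch and the exact formulation of the original equations (1)-(4) may differ in detail:

    T_{\mathrm{cog}} = -\tfrac{1}{2}\,\Phi^{2}\,\frac{\mathrm{d}R}{\mathrm{d}\theta}

    B_{g} = \frac{F_{g}}{R_{g}A_{g}} = \frac{\mu_{0}F_{g}}{l_{g}}, \qquad R_{g} = \frac{l_{g}}{\mu_{0}A_{g}}

    B_{g}(\theta) = B_{\max}\cos\theta \;\;\Rightarrow\;\; l_{g}(\theta) = \frac{\mu_{0}F_{g}}{B_{\max}\cos\theta}

The last relation makes explicit how a sinusoidal air-gap flux density implies an air-gap equivalent length that varies with the electrical angle θ.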
In general, IPMSMs can be analysed in the synchronous frame, as the stator winding is energised by the d- and the q-axis currents, and the motor performance can be expressed in terms of the d- and the q-axis inductances. However, the actual values of the inductances vary non-linearly due to magnetic saturation; hence, the finite element method (FEM) is used to calculate the performance of the IPMSM in this paper.

Electromagnetic performance comparison

3.1 No-load characteristics

The no-load flux density distributions of the three IPMSMs are shown in Fig. 2. As can be seen, the three motors have the same flux density at the ribs. The flux density distribution within the bridge of the proposed rotor 'Type C' is significantly different from the others. Along this elongated bridge the iron is saturated, with the level of saturation varying from 2.12 T down to 2.04 T. In other words, the varying air-gap effect is obtained through the magnetic design rather than through physical pole shaping. The air-gap flux density waveforms of the three motors are shown in Fig. 3. The fundamental component of the air-gap flux density B and the THD of the three motors are shown in Table 3. For the motor with rotor 'Type A', the air-gap flux density distribution is lower and more distorted. The air-gap flux density distribution of the motor with rotor 'Type B' is more sinusoidal than that of the motor with rotor 'Type A'. In fact, the rotor structure with a non-uniform air gap formed by shaping the rotor pole surface is an effective measure to improve the air-gap flux density distribution. The motor with rotor 'Type C' has the best air-gap flux density distribution, with the largest fundamental amplitude and the smallest THD (see the short numerical sketch below for how these two quantities are obtained from the sampled waveform). The reason is that the leakage flux is reduced and more flux is concentrated along the d-axis of the rotor. The cogging torque waveforms of the three motors are shown in Fig. 4. It can be seen that the cogging torque of the motor with rotor 'Type A' is the highest among the three motors. Compared with the motor with rotor 'Type A', the cogging torque is reduced by 39.59% with rotor 'Type B' and by 73.52% with rotor 'Type C'. Consequently, the proposed rotor 'Type C', like rotor 'Type B', effectively yields a more sinusoidal distribution of the air-gap flux density, which has a significantly positive effect on reducing the cogging torque.

3.2 Load characteristics

The electromagnetic properties of an IPMSM at the rated working point are a good criterion to evaluate its performance. The rated power and the rated speed of the IPMSM studied in this paper are 35 kW and 4000 rpm, respectively, as per Table 1. To comprehensively compare the three motors, both the maximum torque per ampere (MTPA) control strategy and the i_d = 0 control strategy are considered in this section. The electromagnetic properties of the three motors at the rated working point are then calculated by FEM, and the results are shown in Table 4. It can be seen that the currents of the motors with rotor 'Type C' and 'Type B' are smaller than that of the motor with rotor 'Type A' under the i_d = 0 control strategy due to their higher back-EMF and higher power factor. When using the MTPA control strategy, the currents of the three motors are reduced significantly due to the inherent reluctance of the topology. As the currents required to produce the rated power become smaller, the efficiency and power factor increase correspondingly.
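The fundamental amplitude and THD figures reported in Table 3 can be obtained directly from a sampled air-gap flux density waveform. The sketch below assumes b holds the waveform sampled uniformly over exactly one electrical period; it is an illustrative post-processing step, not the authors' code.

    import numpy as np

    def fundamental_and_thd(b):
        # b: air-gap flux density sampled uniformly over one electrical period.
        n = len(b)
        spectrum = np.fft.rfft(b) / n
        amps = 2.0 * np.abs(spectrum[1:])   # peak amplitude of each harmonic
        fundamental = amps[0]               # first harmonic of the period
        thd = np.sqrt(np.sum(amps[1:] ** 2)) / fundamental
        return fundamental, thd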
The currents of the motors with the physical air-gap shaping 'Type B' and with the inward-extended bridge 'Type C' are slightly higher than that of the first (conventional) one, which indicates some loss of reluctance torque. Correspondingly, their optimal advance angles are smaller at the rated working point. The reluctance torque of the three motors (Types A-C) is calculated as 22.52 N m (26.93% of the rated torque), 21.35 N m (25.54% of the rated torque) and 18.17 N m (21.73% of the rated torque), respectively (the standard dq-frame decomposition behind these figures is sketched after this section). Less reluctance torque is produced when rotor 'Type B' or rotor 'Type C' is adopted. To examine this point further, their inductances at different currents are calculated and shown in Fig. 5. As shown in Fig. 5, the q-axis inductance (L_q) and the ratio of the q-axis to the d-axis inductance (L_q/L_d) of the three motors all decrease with current due to the saturation effect, whereas L_d is much less affected by the current. The motors with rotor 'Type B' or 'Type C' have smaller L_q and L_q/L_d over the entire current range. The reluctance torque depends not only on L_d but also on the ratio L_q/L_d. From the foregoing discussion and analysis, less reluctance torque is produced when rotor 'Type B' or rotor 'Type C' is adopted in the IPMSMs.

Table 4 Electromagnetic properties of the three motors at the rated working point

                        i_d = 0 control             MTPA control
                        Type A   Type B   Type C    Type A   Type B   Type C
    power, kW           35       35       35        35       35       35
    speed, rpm          4000     4000     4000      4000     4000     4000
    current, A          196      194      190       160      162      163
    advance angle, deg  0        0        0         29       28       26
    voltage, V          174      171      167       146      144

The waveforms of torque of the three motors at the rated condition are shown in Fig. 6. They achieve similar average torque; however, the motor with rotor 'Type A' has the highest torque ripple and the motor with rotor 'Type C' has the lowest on-load torque ripple. Their torque ripples are 33.75, 25.51 and 19.98% for rotor Types A, B and C, respectively. The air-gap friction losses of the IPMSM with the non-uniform air gap (rotor Type B) and with the uniform air gap (rotor Type C) are calculated numerically for the case of the rotor rotating in air, as shown in Fig. 7. The air-gap friction losses increase with the rotor speed, with the difference in losses between the two cases becoming more significant at higher rotational speeds. The benefit of the uniform air gap in reducing the air-gap losses is even more pronounced if the presence of oil particles in the air gap is considered. Traction motors are required to operate efficiently within a wide range of speeds. In EVs this is even more important, as efficiency is directly linked to the driving range and hence to overall customer acceptance. Under the MTPA control strategy, the steady-state efficiency characteristics of the three motors with a DC bus voltage of 320 V are calculated. Table 5 shows the statistics of the efficiencies of the three motors. The maximum efficiencies of the three motors are almost the same. Efficiencies above 80% are spread over a large area of the operating region for all three motors. The motor with rotor 'Type A' has a slightly larger area with operating efficiencies above 95%; however, the motor with rotor 'Type C' has a larger area with operating efficiencies above 80, 85 and 90%.
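The magnet/reluctance torque split quoted above follows from the standard dq-frame torque expression T = 1.5 p [psi_m i_q + (L_d - L_q) i_d i_q]. The sketch below uses illustrative parameter values, not the prototype's actual data.

    import numpy as np

    def ipmsm_torque(psi_m, Ld, Lq, i_amp, gamma_deg, pole_pairs):
        # Split the electromagnetic torque of an IPMSM into magnet and
        # reluctance components; gamma_deg is the current advance angle
        # measured from the q-axis.
        gamma = np.deg2rad(gamma_deg)
        i_d = -i_amp * np.sin(gamma)
        i_q = i_amp * np.cos(gamma)
        t_magnet = 1.5 * pole_pairs * psi_m * i_q
        t_reluctance = 1.5 * pole_pairs * (Ld - Lq) * i_d * i_q
        return t_magnet, t_reluctance

    # Example: sweep the advance angle to locate the MTPA operating point
    # (all machine parameters below are illustrative placeholders).
    angles = np.linspace(0.0, 60.0, 61)
    torques = [sum(ipmsm_torque(0.07, 0.3e-3, 0.8e-3, 160 * np.sqrt(2), a, 4))
               for a in angles]
    print("MTPA advance angle ~", angles[int(np.argmax(torques))], "deg")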
As EV motors are operating most of the time across fairly wide power-speed ranges (contrary to say those in HEVs), it is likely that rotor 'Type C' would give more driving range out of a single battery charge, but this conclusion would necessitate further vehicle-level analysis considering motor residency points over the applicable driving cycle [22]. Radial force analysis Due to the demanding requirements by the industry for a comfortable passenger environment, the vibration and noise of traction motors require careful consideration. The vibration and noise in IPMSMs can be classified into three main areas based on its source: (i) electromagnetic, (ii) aerodynamic and (iii) mechanical [23][24][25]. The electromagnetic source is the dominating one in low-to medium-power rated machines [26]. Cogging torque, torque ripple and radial force F r are the main electromagnetic sources of noise and vibration. F r is the normal component of the magnetic force, which is caused by the interaction between the rotor PMs and the stator teeth. A simple way to predict the noise and vibration of IPMSMs is by analysing the radial force. The FEA of IPMSMs with the three different rotor structures is carried out, from which F r is deduced, consisting of different time harmonics which are analysed using a Fourier series, the results of which are shown in Fig. 8. From the simulation results, it can be seen that not only the time harmonics, but also the spatial harmonics of radial force can be attenuated by the proposed rotor structures 'Type C'. 4.β Mechanical stress analysis A crucial issue of the V-shape rotor design is that, the bridges and ribs should have sufficient mechanical strength to resist the high centrifugal force, particularly at the maximum speed. The materials M270-35A and N38UH are used for the laminations and PM, respectively, as a compromise between cost and performance. Their mechanical properties are shown in Table 6. Mechanical FEA is performed on the rotor assembly. Fig. 9 shows the von-Mises stresses of the three rotors at 12,000 rpm. As shown in the figures, the maximum stresses indicated by 'Max' of the three rotors on the stress plot are at the ribs and their values are well below the yield strength of M270-35A (450 MPa). From this it follows that the ribs are the most critical part to ensure the mechanical strength of the rotors. Since less lamination is used per pole in the rotor 'Type B', the maximum stress at the rib is smaller than that of the other two rotors. Conversely, the proposed rotor with extended bridges 'Type C' has the highest von-Mises stress at the rib albeit its value is still safely below the steel's yield strength by a safe margin. Fig. 10 shows the tensile stresses of the magnets in the three rotors at 12,000 rpm. It can be seen that the maximum tensile stress on the magnets of rotor 'Type C' is higher due to the inhomogeneous deformation of the lamination, which is almost double that of rotor 'Type A'. Nonetheless, the maximum mechanical strength of the magnets in the proposed rotor 'Type C' does not exceed the tensile strength of N38UH, by an acceptable safety margin. Experimental verification From the previous sections and foregoing analysis it has been shown that the IPMSM with conventional rotor 'Type A' has the highest cogging torque and torque ripple together with the worst noise and vibration behaviour which can be obtained indirectly by calculating the radial force. 
On the other hand, the IPMSM with the proposed rotor configuration has the lowest air-gap flux density THD, cogging torque, torque ripple and radial force, without resorting to mechanically uneven air gaps and their associated increased air-gap friction losses. Taking into account all the considerations discussed in this paper, the IPMSM with rotor 'Type C' is more satisfying considering comprehensive system-level considerations and it is selected as the machine to prototype and test. The prototype of V-shape IPMSM with rotor 'Type C' is manufactured as shown in Fig. 11. The comparison results of phase back-EMF between simulation and experiment are shown in Fig. 12. It can be seen that the measured and calculated phase back-EMF waveforms are in good agreement. Fig. 13 shows the torque waveforms of the FEA and experimental results at the rated condition under the MTPA control. The torque waveform of the prototype was calculated by FEA matches obtained by the experiment with a small discrepancy, of <3%. The torque ripple is lower than 20%, which experimentally validates the foregoing conclusion that the rotor 'Type C' is an appropriate structure to reduce the torque ripple. In order to make a more comprehensive evaluation of the prototype, the curves of the maximum torque and the corresponding power and efficiency versus speed are also measured as shown in Fig. 14. The V-shape IPMSM with rotor 'Type C' meets the design specifications listed in Table 1. Therefore, it is concluded and verified that 'Type C' is a strong candidate rotor structure for IPMSMs used in EVs with good torque characteristics. Conclusion In this paper an improved configuration for V-shaped IPMSM with inward-extended bridges is proposed, analysed and compared to both the classic V-shape rotor configuration as well as the previously published rotor concept with the uneven air gap. It has been shown that the elongated bridge has the effect of making the air-gap flux density more sinusoidal and through the analysis and experimentation its main merits of reducing the cogging torque, radial force and torque ripple without sacrificing the average output torque, the maximum torque and the efficiencies demonstrated. From the authors' experience, for EV traction motors to meet performance and cost targets, there is an increasing recent trend to design for higher rotational speeds as well as use direct oil-spray cooling on the windings [27] with some resulting oil penetration in the air gap. Both these factors contribute to significant increases in air-gap mechanical losses, especially if the air gap is non-uniform. The presented rotor topology of this paper melds well with the aforesaid recent trends, and can thus be well suited for such applications where the torque quality needs to be maximised while mitigating the air-gap friction losses. IET Electr. Power Appl., 2020, Vol. 14 Iss. 12, pp. 2404-2411 © The Institution of Engineering and Technology 2020
5,446
2020-09-17T00:00:00.000
[ "Engineering", "Physics" ]
The Mantle Viscosity Structure of Venus The long‐wavelength gravity and topography of Venus are dominated by mantle convective flows, and are hence sensitive to the planet's viscosity structure and mantle density anomalies. By modeling the dynamic gravity and topography signatures and by making use of a Bayesian inference approach, we investigate the viscosity structure of the Venusian mantle by constraining radial viscosity variations. We performed inversions under a wide range of model assumptions that consistently predicted the existence of a thin low‐viscosity zone in the uppermost mantle. The zone is about 235 km thick and has a viscosity reduction of 5–15 times with respect to the underlying mantle. Drawing a parallel with the Earth, the reduced viscosity could be a result of partial melting as suggested for the origin of the asthenosphere. These results support the interpretation that Venus is a geologically active world predominantly governed by ongoing magmatic processes. 2 of 11 Kiefer & Peterson, 2003). Meanwhile, regional gravity and topography analyses showed that several Venusian highlands, mostly the plateaus associated with tessera terrains, were compensated by crustal thickness variations in contrast to the volcanic rises that have important support from deep mantle sources (Grimm, 1994;Maia & Wieczorek, 2022;Simons et al., 1997;Smrekar & Phillips, 1991). Several studies have tested the impact of radial mantle viscosity variations on the predicted gravity and topography of Venus, either adopting a dynamic loading model (Herrick & Phillips, 1992;Kiefer et al., 1986;Pauer et al., 2006;Steinberger et al., 2010) or making use of 3D thermal evolution models (Huang et al., 2013;Rolf et al., 2018). The vast majority of these studies focused on the possibility of a viscosity jump at a depth analogous to the 660 km phase transition on Earth, which corresponds to about 730 km on Venus (Armann & Tackley, 2012), and found that the existence of such a feature was inconsistent with the gravity and topography observations. Alternatively, Pauer et al. (2006) made use of Monte Carlo inversions along with the dynamic loading model to estimate the viscosity structure of Venus. Their study showed that the viscosity of the mantle likely increases gradually with depth, and that there could be a thin low viscosity channel in the upper mantle. The moment of inertia and k 2 Love number of Venus could be used to investigate the viscosity profile as well, but they are not known with sufficiently accuracy to well-constrain relative variations with depth (see Figure 6 of Saliby et al., 2023). In this work we used state-of-the-art inversion methods and data analysis techniques to constrain the mantle viscosity structure of Venus. We adopted the multitaper spatio-spectral localization method of Wieczorek and Simons (2007) to remove shallowly compensated regions from the analysis, and the viscosity estimations were performed using a Bayesian inference approach (Speagle, 2020). A variety of assumptions concerning boundary conditions and the density variations in the mantle were tested to assess the robustness of our results. In particular, we investigated scenarios where the density anomalies are concentrated within a single thin layer at a specific depth (e.g., Herrick & Phillips, 1992) or where they are uniformly distributed with depth in the mantle (e.g., Pauer et al., 2006). 
Ultimately, our study aims to contribute to a better understanding of the geodynamic and tectonic regime of Venus (e.g., Rolf et al., 2022) and to elucidate how the geologic histories of Earth and Venus diverged. Dynamic Loading Model The earliest seismic tomography studies of Earth showed that lateral variations of mantle temperature are correlated with long-wavelength geoid anomalies (e.g., Dziewonski et al., 1977). These observations motivated a series of studies to develop a dynamic loading model which allows for a quantitative interpretation of gravity in terms of mantle dynamics (Hager & Clayton, 1989;Ricard et al., 1984;Richards & Hager, 1984). The model consists of computing the instantaneous viscous flow that is predicted by the imposed density anomalies in the mantle for a given viscosity structure. The flow induces deformations of the planet's surface and core, generating dynamic topography and associated gravity anomalies. For this model, the mantle is treated as an incompressible Newtonian fluid whose viscosity varies only with depth. Moreover, time-dependent and inertial forces are neglected. With these conditions, and by making use of a spherical harmonic decomposition of variables with an angular dependence, the problem can be written as a system of ordinary differential equations for each spherical harmonic degree which can be solved analytically using a propagator matrix technique. The solution is propagated from the core-mantle boundary to the surface with defined boundary conditions, passing through an arbitrary number of constant viscosity layers. We assume a free-slip boundary condition at the core-mantle boundary, whereas for the surface we evaluate both no-slip and free-slip end-member cases. The model consists of developing depth-and viscosity-dependent kernels as a function of spherical harmonic degree, which describes how the planet adjusts to a unitary mantle load at radius r. In particular, we are interested in the gravity kernel G ℓ and topography kernel H ℓ , and these are computed following the approach of James et al. (2013) that includes an elastic lithosphere (see Section S1 in Supporting Information S1). Once the kernels have been computed, we can estimate the dynamic gravity and topography as the convolution of the kernel with the imposed density anomalies in the mantle δρ ℓm (r), as follows: where R cmb is the radius of the core-mantle boundary, R is the mean planetary radius and l and m are respectively the spherical harmonic degree and order. Figure S1 in Supporting Information S1 demonstrates how the kernels depend upon the viscosity profile and depth of the mass anomalies in the mantle. Since tomography models are not available for Venus, we treat the mantle density anomalies as an unknown that will be determined in our inversion procedure. To make the inversion tractable, we either assume that the anomalies are concentrated in a single thin layer at a specified depth (Herrick & Phillips, 1992;James et al., 2013), or that the density anomaly has the same value at all depths (Kiefer et al., 1986;Pauer et al., 2006). The predicted gravity for the single mass-sheet is with ϕ ℓm representing the surface density anomaly (in kg m −2 ) at radius R ϕ . The equation for the case where the density anomaly is constant with depth is given in Section S2 in Supporting Information S1. Following the approach of Pauer et al. 
(2006), each coefficient ϕ ℓm is computed such that it minimizes the difference between the observed and predicted gravity and topography (see Section S3 in Supporting Information S1). Localized Bayesian Inversion Although mantle flows play an important role in shaping the long wavelength gravity and topography of Venus (e.g., Kiefer et al., 1986;Phillips & Malin, 1983), there are major topographic highlands that are mainly a result of crustal thickness variations (e.g., Kucinskas et al., 1996;Maia & Wieczorek, 2022;Simons et al., 1997). These shallowly compensated regions are inconsistent with the assumptions of the global dynamic loading model and Pauer et al. (2006) found that the worst predictions from their inversions were for the highlands of Ishtar Terra and Ovda Regio. They attempted to remove these signals by applying a binary mask to the gravity and topography, and then computing a localized power spectrum, but binary masking procedures have well known spectral leakage problems (e.g., Wieczorek & Simons, 2005). In order to more rigorously remove from our analysis the signals associated with the compensated highlands, we employ the multitaper spectral analysis technique as developed by Wieczorek and Simons (2005); Wieczorek and Simons (2007). Our analysis region excludes Ishtar Terra and Western Aphrodite Terra (see Figure 1), and following the approach of Simons et al. (2006) we constructed orthogonal localization windows using a specified spectral bandwidth. For a bandwidth of ℓ win = 3, there are a total of 9 windows that concentrate more than 99% of their power in the region of interest. This number of windows provides an acceptable uncertainty for the localized spectra, and the small spectral bandwidth provides a large number of uncorrelated spectral estimates (see Section S6 in Supporting Information S1). The results of the multitaper localization are shown in Figure 1. We make use of the VenusTopo719 topography model (Wieczorek, 2015a) and the MGNP180U gravity solution (Konopliv et al., 1999), both of which are based on the final Magellan mission datasets. The map in panel (a) shows the total power of the 9 localization windows summed in the space domain with the target localization region outlined by the white contour. In panel (b) we present the global and localized spectral admittance and correlation of gravity and topography (see Section S6 in Supporting Information S1 for the definition of these quantities). The localization leads to an increase in the admittance of about 30% over the entire spectrum, which is caused by the exclusion of highland regions that have high topography and low gravity. The correlation also shows a significant increase in the long-wavelength range due to the data localization, for ℓ < 40 the average correlation increases from 0.81 to 0.89. The localized spectra of gravity, topography, and admittance (shown in Figure S2 in Supporting Information S1) are then used to invert for the mantle viscosity structure. These observations are compared with similarly localized spectra predicted by the dynamic loading model. One important aspect of the model is that the predicted gravity and topography are only sensitive to relative viscosity variations, and that the absolute viscosity cannot be 4 of 11 constrained. When considering the case where the mantle density anomalies are modeled as a single mass-sheet (Equation 3), the depth of the density anomaly is taken as a free parameter. 
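The localized admittance and correlation quoted above are simple functions of the degree-wise cross-power spectra of the gravity and topography coefficients. A minimal sketch follows, assuming real spherical harmonic coefficients stored in a (2, lmax+1, lmax+1) array layout indexed as [cos/sin, degree, order]; the layout is an assumption for illustration, not a requirement of the method.

    import numpy as np

    def cross_power_spectrum(a, b):
        # Cross-power per degree of two sets of real spherical harmonic
        # coefficients, each stored as an array of shape (2, lmax+1, lmax+1).
        return np.sum(a * b, axis=(0, 2))

    def admittance_and_correlation(g_lm, h_lm):
        # Spectral admittance Z_l = S_gh / S_hh and correlation
        # gamma_l = S_gh / sqrt(S_gg * S_hh) between gravity and topography.
        s_gh = cross_power_spectrum(g_lm, h_lm)
        s_gg = cross_power_spectrum(g_lm, g_lm)
        s_hh = cross_power_spectrum(h_lm, h_lm)
        admittance = s_gh / s_hh
        correlation = s_gh / np.sqrt(s_gg * s_hh)
        return admittance, correlation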
The model depends on other parameters such as the core radius, the core-mantle density contrast and the average elastic lithosphere thickness, which are fixed in this study (see Table S1 in Supporting Information S1). The mantle and core related parameters are from Aitta (2012) and are based on the Venus-scaled preliminary reference Earth model. The adopted values are consistent with moment of inertia estimates (Margot et al., 2021) and other interior modeling studies (Dumoulin et al., 2017) of Venus. The elastic thickness of the lithosphere was set to zero. As discussed in Section S5 in Supporting Information S1 and shown in Figures S3-S5 in Supporting Information S1, the choice of these parameters has only a negligible impact on our results. To statistically evaluate the uncertainties of our model estimations, and considering the relatively large number of free parameters in our problem, we opted for a Bayesian sampling technique that provides the posterior probability of each parameter. We made use of the DYNESTY package, which is a Python implementation of the dynamic nested sampling method (Speagle, 2020). Nested sampling estimates the marginal likelihood and the posterior distribution by sampling within nested shells of increasing likelihood. The likelihood function adopted in our inversions is described in Section S4 in Supporting Information S1. Inversion Results Even though we performed inversions for a wide range of scenarios (see Section S5 and Figures S4 and S5 in Supporting Information S1), we chose to focus our analysis on the three cases that presented the largest variations in the results. The nominal case has a no-slip boundary condition at the surface and the mantle density anomaly is parameterized by a single mass sheet. The free-slip case differs from the nominal by having a free-slip boundary at the surface that allows for tangential movement of the surface, while the δρ-constant case has constant density anomalies with depth in the entire mantle along with a no-slip boundary at the surface. For these three inversions the number of constant-viscosity layers was set to four, with each layer being specified by its viscosity η i and depth to the bottom of the layer d i . Since our model is only sensitive the relative viscosity variations, we set η 1 = 1. Assuming that the core radius is known, the viscosity structure is defined by six free parameters. The nominal and free-slip scenarios have the mass-sheet depth d ϕ as an additional free parameter. Given the lack of information about our parameters, we considered for our priors a uniform distribution for the depth-related parameters and a log-uniform distribution for the viscosities. The only strong prior we set was to assume that the viscosity of the uppermost lithospheric layer was greater than the underlying layer (i.e., log 10 (η 2 /η 1 ) < 0). Such an assumption is a natural feature of temperature dependent rheological models (e.g., Breuer & Moore, 2015) and was used previously by Pauer et al. (2006). Figure 2 presents the posterior probability distribution of each free parameter for our three scenarios. The upper four panels are for the depths of the first three viscosity layers and depth of the mass-sheet. The bottom panels correspond to the viscosity ratios of the second, third, and fourth layers with respect to the overlying layer. Positive ratios indicate an increase in viscosity with respect to the layer above while negative ratios indicate a decrease in viscosity. 
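A minimal sketch of how the seven-parameter nominal inversion described above can be set up with the DYNESTY dynamic nested sampler is given below. The prior bounds and the stand-in likelihood are illustrative placeholders, not the values or misfit function used in this study.

    import numpy as np
    import dynesty

    ndim = 7  # 3 layer-bottom depths + 3 log10 viscosity ratios + mass-sheet depth

    def prior_transform(u):
        # Map unit-cube samples to physical parameters: uniform priors on the
        # depths, log-uniform priors on the viscosity ratios (bounds illustrative).
        d1, d2, d3 = 20.0 + u[0:3] * (1500.0 - 20.0)   # layer-bottom depths, km
        r2 = -3.0 + u[3] * 3.0     # log10(eta2/eta1), restricted to <= 0
        r3 = -3.0 + u[4] * 6.0     # log10(eta3/eta2)
        r4 = -3.0 + u[5] * 6.0     # log10(eta4/eta3)
        d_phi = 100.0 + u[6] * 1300.0                  # mass-sheet depth, km
        return np.array([d1, d2, d3, r2, r3, r4, d_phi])

    def loglike(theta):
        # Toy stand-in: the real likelihood compares the localized gravity,
        # topography and admittance spectra predicted by the dynamic loading
        # model for this viscosity profile with the observed spectra.
        centre = np.array([100.0, 400.0, 1200.0, -0.5, 1.0, 0.0, 250.0])
        width = np.array([50.0, 100.0, 300.0, 0.5, 0.5, 0.5, 100.0])
        return -0.5 * np.sum(((theta - centre) / width) ** 2)

    sampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim)
    sampler.run_nested()
    samples = sampler.results.samples   # posterior samples
    weights = np.exp(sampler.results.logwt - sampler.results.logz[-1])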
All parameter estimations are detailed in Table S3 in Supporting Information S1. Our results show that all scenarios consistently prefer shallow depths for the base of the lithospheric layer, with values less than about 200 km (Figure 2a). In contrast, the viscosity decrease to the underlying layer ( Figure 2e) is relatively unconstrained, which is partially a result of the small thickness of the uppermost layer. The viscosity interface between the second and third layer is the best constrained from our inversions (panels 2b and 2f). All three model scenarios indicate that the third layer increases in viscosity by about one order of magnitude at a depth of about 245-435 km, with median values for the mass-sheet depth of 239 km for the nominal case and 329 km for the free-slip case. The change in viscosity between the third and fourth layers is quite variable and differs in all three loading scenarios (panels 2c and 2g). For the nominal case, both the layer depth and viscosity are poorly constrained, although the solutions prefer depths larger than 1,000 km. The free-slip case tends to prefer larger depths and lower viscosities for the deepest layer, with the most probable case corresponding to no change in viscosity. The δρ-constant model, on the other hand, has a well-constrained viscosity increase of about 10 times at 1,300-1,550 km depth. In Figure 3 the viscosity profiles for all solutions are shown via 2-dimensional posterior distributions for our three loading scenarios. Since one of the best constrained aspects of our model is the increase in viscosity between the second and third layer, for a better visualization, we scaled our viscosity profiles such that the viscosity of the second layer was 1. The solid curves in these figures represent the logarithmic mean viscosity at each depth. The upper mantle structure is similar for the three scenarios, with a consistent viscosity increase occurring between the second and third layers. As for the lower mantle, the loading scenarios that use a single mass-sheet (Figures 3a and 3b) indicate an isoviscous structure, although some of the solutions suggest a basal low-viscosity layer above the core. As for the δρ-constant model (Figure 3c), we see a second viscosity jump at about 1,400 km depth. Upper Mantle Low Viscosity Zone Our inversions indicates the presence of a zone beneath the lithosphere characterized by viscosity values that are roughly 10 times lower than the underlying mantle. Its thickness is about 150-300 km, starting at the base of the lithosphere down to a depth of 268-435 km. This low viscosity zone can be interpreted as an asthenosphere-like layer. The asthenosphere of Earth is a mechanically weak layer starting beneath the lithosphere and that extends to the top of the transition zone at about 400 km depth. It is considered to be a key ingredient for plate tectonics (e.g., Rolf et al., 2022) and its existence has been supported by several geophysical methods. On Earth, the region is characterized by high electrical conductivity, low seismic velocities, and strong seismic attenuation (e.g., Shankland et al., 1981). In oceanic regions, seismological observations have shown that the lithosphere-asthenosphere boundary occurs sharply at about 70 km depth, while for the sub-continental mantle the seismic signature of this boundary is weaker and deeper, at about 200 km depth (Karato, 2012). 
Gravity investigations considering postglacial rebound and/or dynamic loading commonly indicate that the asthenosphere of Earth is also characterized by low viscosities, although the published estimates present some discrepancies (see reviews by King, 2016;Richards & Lenardic, 2018). Estimations of the low viscosity zone (LVZ) are generally associated with reductions in viscosity of one to three orders of magnitude with respect to the underlying mantle. Some studies indicate that the LVZ is fully contained within the asthenosphere (Forte & Mitrovica, 2001;Hager & Clayton, 1989), others suggest that the zone extends to the base of the upper mantle at 660 km depth (King & Masters, 1992;Liu & Zhong, 2016), and others find a low viscosity asthenosphere along with a thin low-viscosity channel at the 660 km transition (Mitrovica & Forte, 2004). Several factors could be responsible for these differences in interpretation. In particular, there is a well-known trade-off between the thickness and viscosity of the low-viscosity zone (Richards & Lenardic, 2018) and gravity studies have difficulties in accounting for lateral viscosity variations. Strong lateral viscosity variations could exist on Earth as a result of subducted slabs and differences in mantle structure beneath oceanic and continental crust (Čadek & Fleitout, 2003). However, such variations are likely to be less important on Venus given its lack of plate tectonics and different mode of heat transport, which is probably associated with regional-scale delamination scattered throughout the planet (e.g., Davaille et al., 2017;Gülcher et al., 2020;Lourenço et al., 2020;Smrekar & Stofan, 1997). Gravity and topography studies of Venus have mostly argued against the existence of a low-viscosity zone in the upper mantle (e.g., Nimmo & McKenzie, 1998). However, most of those studies either did not perform inversions and limited the analysis to a few models representative of Earth-like scenarios (e.g., Herrick & Phillips, 1992;Kiefer et al., 1986;Kiefer & Hager, 1991;Steinberger et al., 2010). Moreover, gravity investigations prior to 1992 used data from the Pioneer Venus mission whose resolution was limited to degree 18. From a different perspective, studies by Huang et al. (2013) and Rolf et al. (2018) estimated geoid anomalies for Venus using three-dimensional mantle convection models. They showed that the presence of a thick LVZ, ranging from the base of the lithosphere down to the ringwoodite-bridgmanite phase transition at 730 km depth, was inconsistent with the observations. These results are in fair agreement with our study that predicts a LVZ thickness that is about half of the upper mantle thickness. In addition, we note that Monte Carlo inversions performed by Pauer et al. (2006) showed that many models were consistent with the presence of a LVZ with a thickness of a couple hundred kilometers. The origin of such a low viscosity layer on Venus is arguable. As a starting point, we may draw a parallel with Earth and evaluate the mechanisms that have been proposed to explain the existence of its asthenosphere. However, this is a heavily debated topic with several proposed hypotheses. 
Some studies proposed that the asthenosphere results from the presence of small amounts of partial melts in the upper mantle (Anderson & Spetzler, 1970;Chantel et al., 2016;Debayle et al., 2020;Hua et al., 2023), while others consider that the region is better explained by a subsolidus regime associated with rheological weakening of mantle rocks under temperatures that are close to the solidus (Karato, 2012;Takei, 2017). For the latter hypothesis, it has been suggested that dissolved water in olivine could effectively reduce the viscosity (Hirth & Kohlstedt, 1996). Alternatively, low viscosities in the uppermost mantle could be caused by the predominance of dislocation creep, with larger depths being dominated by diffusion creep with higher viscosities (Semple & Lenardic, 2021;Van Den Berg & Yuen, 1996). Finally, viscosity interfaces in the mantle could be linked to mineralogic phase transitions (Meade & Jeanloz, 1990). of 11 Even though the surface and atmosphere of Venus are dry, the water content of its interior is unknown. Both Venus and Earth should probably have accreted similar amounts of volatiles (e.g., O'Brien et al., 2018), but the abundance of these volatiles in Venus's interior is poorly known (e.g., Gillmann et al., 2020Gillmann et al., , 2022Way & Genio, 2020). The existence of partial melt in Venus's mantle seems plausible, given the indications that the planet is still volcanically active, particularly in regions associated with active mantle plumes (e.g., Gülcher et al., 2020;Herrick & Hensley, 2023;Mueller et al., 2008;Shalygin et al., 2015;Smrekar et al., 2010). Moreover, recent studies have shown that large amounts of magmatic intrusions could play a primary role in the mobility of the lithosphere and crustal recycling, representing an efficient mechanism for heat loss (Lourenço et al., 2020;Smrekar et al., 2023;Tian et al., 2023). Lastly, tomography investigations by Debayle et al. (2020) indicate that hotspots on Earth are associated with high melt content in the upper mantle. Following the well-established experimentally-derived relation between melt fraction and viscosity (e.g., Hirth & Kohlstedt, 2003) we estimate that the low viscosity layer we find on Venus could be associated with 5%-11% melt. On the other hand, more recent experiments have shown that even very small interconnected fractions of melt can have a significant impact on the viscosity (Holtzman, 2016;Takei & Holtzman, 2009). From this perspective, our viscosity reduction estimations could be associated with as little as 0.05% melt (see Figure S6 in Supporting Information S1 for more details concerning these calculations). Finally, we note that the low viscosity zone and the possibly associated partial melt does not necessarily need to be uniform throughout the planet. In fact, considering that the largest gravity and topography signatures come from the volcanic rises it is likely that their signatures dominate our analysis and that our results are mostly representative of these geologically active regions. Consequences of the Mantle Load Parameterization Due to the lack of information regarding density heterogeneities in the Venusian mantle we were required to make some simplifications in our inversions. To assess the importance of these assumptions, we investigate further the two density anomaly parameterizations used in our study. 
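The 5%-11% melt estimate quoted above can be reproduced with the commonly used exponential melt-weakening law eta(phi) = eta_0 exp(-alpha phi) of the type attributed to Hirth and Kohlstedt (2003). In the sketch below, the 5-15 times viscosity reductions come from the inversion results, while the values of alpha (roughly 25-30) are assumptions chosen for illustration.

    import numpy as np

    def melt_fraction(viscosity_reduction, alpha):
        # Invert eta/eta_0 = exp(-alpha * phi) for the melt fraction phi.
        return np.log(viscosity_reduction) / alpha

    for reduction in (5.0, 15.0):        # viscosity reduction factors from the inversion
        for alpha in (25.0, 30.0):       # assumed melt-weakening coefficients
            phi = melt_fraction(reduction, alpha)
            print(f"reduction {reduction:4.0f}x, alpha {alpha:.0f}: "
                  f"melt fraction ~ {100 * phi:.1f}%")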
Herrick and Phillips (1992) argued that since mantle plumes likely dominate the density anomaly distribution in Venus, most of the anomalies should be concentrated in a relatively thin and horizontal layer associated with the plume head beneath the lithosphere. In this scenario, a single mass-sheet parameterization would be more appropriate. On the other hand, Pauer et al. (2006), made a comparison to Earth's mantle density patterns arguing that plumes and subducted slabs penetrate the mantle more or less vertically, indicating that a depth-independent distribution of density anomalies could be a reasonable first approximation. Although the two different scenarios have a significantly different depth-dependence of the density anomalies, our results consistently predict a low viscosity zone in the upper mantle. For the deep mantle, however, clear differences in the viscosity structure were obtained. The single mass-sheet scenarios (Figures 3a and 3b) tends to accept a wide range of solutions for the deepest viscosity layer, with a large number of models consistent with an isoviscous mantle below the low viscosity zone. Nevertheless, we note that about 35% of the models present a viscosity decrease of over one order of magnitude from layer 3 to 4, which would correspond to a basal low viscosity zone. If confirmed, these regions could be analogous to the large low shear velocity provinces on Earth (e.g., French & Romanowicz, 2015) or indicate the presence of partial melting (O'Rourke, 2020). In contrast, the depth-independent case (Figure 3c) requires an increase in viscosity in the mid mantle between the third and fourth layers at 1,330-1,550 km depth, which is significantly deeper than the ringwoodite-bridgmanite phase transition. Interestingly, a comparable viscosity jump has been suggested for Earth by several dynamic loading investigations (e.g., Forte & Peltier, 1991;Rudolph et al., 2015Rudolph et al., , 2020, indicating an increase in viscosity at depths of about 800-1,200 km that corresponds roughly to 960-1,330 km on Venus. In any case, our results should motivate future studies to explore more realistic density anomaly distributions in Venus based on mantle convection simulations, or by using a more complex statistical distribution of mass anomalies as in Steinberger et al. (2010). The predicted density anomaly distribution estimated for the nominal loading model with the largest likelihood is shown in Figure S7 in Supporting Information S1. As expected, at large volcanic rises, such as Atla and Beta Regiones, we observe negative density anomalies, associated with positive buoyancy and commonly interpreted as regions of mantle upwellings, hotspots and active volcanism (e.g., James et al., 2013;Kiefer & Hager, 1991;Smrekar et al., 2010;Stofan et al., 1995). These anomalies are generally also correlated with regions of high heat 9 of 11 flow (Smrekar et al., 2023) and with regions where coronae could be still active (Gülcher et al., 2020). On the other hand, positive density anomalies correlate with volcanic plains which can interpreted as regions of mantle downwellings. These results are consistent with the density anomaly distributions found in previous studies (Herrick & Phillips, 1992;James et al., 2013;Pauer et al., 2006). Moreover, we note that the different loading scenarios investigated here have comparable horizontal density distribution patterns, although for the δρ-constant case the shorter wavelengths present relatively higher amplitudes (Herrick & Phillips, 1992). 
Conclusions Our study used a Bayesian approach to investigate the mantle viscosity structure of Venus. We employed a dynamic loading model to predict the planet's long-wavelength gravity and topography and compared these predictions to the observations. Using a range of model scenarios, we consistently found that Venus presents a low viscosity zone in the uppermost mantle, a layer that could be interpreted as an Earth-like asthenosphere, potentially resulting from partial melt in the upper mantle. This interpretation supports previous studies that proposed that Venus is currently an active volcanic world (e.g., Gülcher et al., 2020;Herrick & Hensley, 2023;Rolf et al., 2022;Smrekar et al., 2010). Moreover, our inversions disfavor a viscosity jump associated with the ringwoodite-bridgmanite phase transition at 730 km depth. One aspect that merits further investigation is how more realistic distributions of density anomalies in the mantle would affect the predicted viscosity profile. For example, one could use geodynamic simulations to estimate the density distribution based on temperature anomalies predicted by these models. Improvements on deep interior constraints for Venus could be achieved in the future by coupling dynamic loading to tidal deformation investigations. In particular, an integrated analysis would allow for an assessment of the absolute mantle viscosity profile. This approach will be particularly powerful once future missions obtain precise estimates of tidal Love numbers, tidal quality factor, and moment of inertia factor (Cascioli et al., 2021;Rosenblatt et al., 2021). Data Availability Statement A Python routine to estimate the gravity and topography predicted by the dynamic loading model can be found in Maia (2023). The localized spectral analysis was performed using the open-source package Pyshtools (Wieczorek & Meschede, 2018), while the posterior parameter distributions were estimated using the Dynesty package (Speagle, 2020). The spherical harmonic model of the gravity field used in this study can be found in Sjogren (1997), and the topography data set is from Wieczorek (2015b). The perceptually uniform color maps used in this work are from Crameri (2018).
6,403.6
2023-07-29T00:00:00.000
[ "Geology", "Physics" ]
Sodium pumps in the Malpighian tubule of Rhodnius sp Malpighian tubule ofRhodnius sp. express two sodium pumps: the classical ouabain-sensitive ( Na+ + K+)ATPase and an ouabain-insensitive, furosemide-sensitive Na+-ATPase. In insects, 5-hydroxitryptamine is a diuretic hormone released during meals. It inhibits the ( Na+ +K+)ATPase andNa+-ATPase activities indicating that these enzymes are involved in fluid secretion. Furthermore, in Rhodnius neglectus, proximal cells of Malpighian tubule exposed to hyperosmotic medium, regulate their volume through a mechanism called regulatory volume increase. This regulatory response involves inhibition of the ( Na+ + K+)ATPase activity that could lead to accumulation of active osmotic solute inside the cell, influx of water and return to the normal cell volume. Adenosine, a compound produced in stress conditions, also inhibits the ( Na+ + K+)ATPase activity. Taken together these data indicate that ( N + +K+)ATPase is a target of the regulatory mechanisms of water and ions transport responsible for homeostasis in Rhod ius sp. sorption in the proximal segment of the Malpighian tubule, hindgut and rectum.During diuresis, transcellular fluid transport across these insect epithelial cells is very fast (Phillips 1981, Nicolson 1993). In contrast to the herbivorous insects, the hematophagous insects eliminate urine with higher sodium than potassium concentration immediately after a meal (Maddrell et al. 1993b).It has been proposed that fluid secretion in the Malpighian tubule cells involves two principal transporters: 1) the Na + /H + or K + /H + exchanger; and 2) the V-type H + -ATPase. The H + -ATPase would create a proton gradient used by the Na + /H + or K + /H + exchanger to secrete Na + or K + into the tubular lumen (Nicolson 1993, Pannabecker 1995).Furthermore, Cl − could be secreted through a transcellular or through a paracellular route.Na + , K + and Cl − could move across the basolateral membrane via the furosemide and bumetanide-sensitive Na + /K + /2Cl − transporter. SODIUM PUMPS The (Na + + K + )ATPase is crucial for the survival of most cells (Sweadener 1989).The enzyme is an integral plasma membrane protein which actively transports three Na + to the outside of the cell and two K + to the inside, maintaining the electrochemical gradient across the cell membrane (Sweadener 1989).This ATPase is formed by two noncovalently linked subunits in an equimolar ratio: α and β (Xie & Morimoto 1995).The apparent insensitivity to ouabain of the stimulated fluid secretion in many insects tested led to the hypothesis that there was no (Na + + K + )ATPase in the Malpighian tubule cells.However, in the Malpighian tubule of Rhodnius, ouabain, on the basolateral side, increased unstimulated fluid secretion (Maddrell & Overton 1988, Nicolson 1993, Pannabecker 1995).The presence of (Na + +K + )ATPase in the Malpighian tubule was confirmed by Lebovitz et al. (1989) who cloned its α-subunit cDNA in the basolateral membrane of Malpighian tubule of Drosophila melanogaster.More recently, it was shown that a ouabainsensitive (Na + + K + )ATPase activity is present in the Malpighian tubule cells of Rhodnius prolixus (Grieco & Lopes 1997, Caruso-Neves et al. 1998a). Besides the (Na + + K + )ATPase, another sodium pump was found in Malpighian tubule of Rhodnius prolixus (Caruso-Neves et al. 
1998b).This Na + -stimulated ATPase activity has the following characteristics: 1) K 0.5 for Na + = 1.49± 0.18 mM, 3) it is fully inhibited by 2 mM furosemide, 4) it is insensitive to ouabain concentrations up to 10 −2 M, 5) it is sensitive to vanadate indicating it to be a P-type ATPase, and 6) it is stimulated by namolar concentrations of Ca 2+ in the incubation medium. This Na + -ATPase has been described in several cell types (Proverbio et al. 1989, Moretti et al. 1991, Caruso-Neves et al. 1997, 1998b, 1999, Rangel et al. 1999).It was initially described in aged microsomal fractions from guinea-pig kidney cortex as an active Na + transporter not stimulated by K + (Proverbio et al. 1989).This pump has same distribution that (Na + + K + )ATPase, and is only found in the plasma membrane (Proverbio et al. 1989, Caruso-Neves et al. 1997, 1998b, 1999, Rangel et al. 1999).The Na + -ATPase of the Malpighian tubule cells from Rhodnius prolixus is inhibited by KCl in a dose-dependent manner with maximal effect observed at 5 mM (Figure 1).This inhibition is reversed by increasing the Na + concentration.These data indicate that K + could be a physiological modulator of the Na + -ATPase. Fluid Secretion Although the Malpighian tubules from Rhodnius sp.present two Na + pumps, their physiological role is still not clear.In general, it is postulated that the gradient created by (Na + + K + )ATPase in epithelial cells is used for transcellular transport.The observation that ouabain did not change the stimulated fluid secretion in many insect species tested lead some authors to postulated that the (Na + + K + )ATPase is not involved in fluid secretion (Nicolson 1993, Pannabecker 1995).However, it was observed that 5-HT, a diuretic hormone released during meals, inhibits the (Na + + K + )ATPase activity in Malpighian tubule cells from Rhodnius prolixus (Grieco & Lopes 1997).Thus, the modulation of the (Na + + K + )ATPase in the Malpighian tubule could be one of the regulatory mechanisms of fluid secretion.The inhibition of the (Na + + K + )ATPase could lead to intracellular accumulation of Na + and, consequently, to an increase of Na + secretion through the luminal membrane.Since the first step in the rapid excretion phase (during meals) is the elimination of an urine enriched in NaCl and water the inhibition of Na + reabsorption (due to inhibition of the (Na + + K + )ATPase activity) would be an important component in this phase.This hypothesis is supported by the observation that ouabain increases the fluid secretion in isolated and unstimulated Malpighian tubules (Lebovitz et al. 1989, Nicolson 1993). The possible involvement of the ouabaininsensitive Na + -ATPase on the fluid secretion has not been directly investigated yet.Nevertheless, it was observed that this enzyme is inhibited by 5-HT in a dose-dependent manner indicating that it is a target of regulatory mechanisms of water and ions transport responsible for homeostasis in Rhodnius prolixus (Grieco 1999). Stress Conditions The Malpighian tubule cells of Rhodnius sp. 
are exposed to different stress conditions. These cells face different osmolalities depending on the feeding state of the animal (Beyenbach & Petzel 1987, Nicolson 1993). After a blood meal the hemolymph osmolality decreases, because the osmolality of the blood is lower than that of the hemolymph. On the other hand, during starvation the osmolality of the hemolymph increases. Specific regulatory mechanisms such as cell volume regulation are therefore necessary for the survival of the insect. In isosmotic conditions, cell volume regulation is explained by the "pump-leak" hypothesis, in which the (Na+ + K+)ATPase is crucial for maintaining the Na+ and K+ gradients (Leaf 1959, Tosteson & Hoffman 1960). Furthermore, the (Na+ + K+)ATPase is also involved in cell volume regulation during anisosmotic shock (Hoffmann & Dunham 1995). During cell volume regulation there is a variation in the amount of osmotically active solute inside the cell (Hoffmann & Dunham 1995). Variation of the medium osmolality regulates several transport proteins (Yancey et al. 1982). Arenstein et al. (1995) have shown, through video-optical techniques, that proximal cells of the Malpighian tubule of Rhodnius neglectus exposed to hyperosmotic medium regulate their volume with a typical regulatory volume increase (RVI). On the other hand, when these cells are exposed to hyposmotic medium they are unable to regulate their volume completely. The addition of 1 mM ouabain did not change the RVI. Later, we observed that hyperosmotic shock inhibited the (Na+ + K+)ATPase activity but did not change the Na+-ATPase activity (Caruso-Neves et al. 1998a, Figure 2). It is therefore possible to postulate that cell volume regulation during hyperosmotic shock involves the inhibition of the (Na+ + K+)ATPase activity. This effect leads to accumulation of osmotically active solute inside the cell, influx of water and the return to the normal cell volume.
Adenosine is found in all living cells as part of the normal metabolic machinery and appears to accumulate in different tissues in response to different stress conditions (Osswald et al. 1977, Olsson 1990). In addition, it has been observed that one of the effects of adenosine during stress is the modulation of ionic transport (Caruso-Neves et al. 1997). Furthermore, it was observed that adenosine increases fluid secretion in the Malpighian tubule of D. melanogaster (Riegel et al. 1998). Recently, we tested the effect of adenosine on the (Na+ + K+)ATPase activity of Malpighian tubule cells from Rhodnius prolixus and found that adenosine inhibits the enzyme activity in a dose-dependent manner (Caruso-Neves et al. 2000).
Taken together, these data indicate that the (Na+ + K+)ATPase of the Malpighian tubule of Rhodnius sp. is involved in insect water and ion balance, just as it is in mammalian tissues. On the other hand, the role of the Na+-ATPase in insect physiology is still not clear.
Fig. 1 - Dependence of Na+-ATPase activity on KCl concentration. The ATPase activity was measured as described by Caruso-Neves et al. (1998b). The KCl concentrations range from 0.1 to 120 mM and the final osmolality was adjusted to 320 mOsm/kg. The Na+-ATPase activity was calculated from the difference between the ATPase activity in the absence and in the presence of 2 mM furosemide, both in the presence of 1 mM ouabain. Where indicated, NaCl was added at 2, 6 or 90 mM.
Fig. 2 - Effect of hyperosmotic shock on Na+-ATPase and (Na+ + K+)ATPase activities. The ATPase activity was measured as described by Caruso-Neves et al. (1998a). The final osmolality was adjusted to 320 mOsm/kg for the isosmotic solution (open bars) or to 500 mOsm/kg for the hyperosmotic solution (dashed bars) by addition of mannitol. The Na+-ATPase activity was calculated from the difference between the ATPase activity in the absence and in the presence of 2 mM furosemide, both in the presence of 1 mM ouabain. The (Na+ + K+)ATPase activity was calculated from the difference between the ATPase activity in the absence and in the presence of ouabain.
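The figure legends above define both pump activities by inhibitor-difference arithmetic. The sketch below simply writes that arithmetic out; it is an illustrative helper with hypothetical names and made-up numbers, not the authors' analysis code.

```python
def pump_activities(total, plus_ouabain, plus_ouabain_furosemide):
    """Split a total ATPase activity (e.g. nmol Pi / mg / min) into its
    ouabain-sensitive and furosemide-sensitive components.

    total                   -- activity with no inhibitor present
    plus_ouabain            -- activity measured in 1 mM ouabain
    plus_ouabain_furosemide -- activity in 1 mM ouabain plus 2 mM furosemide
    """
    # (Na+ + K+)ATPase: the part of the activity abolished by ouabain.
    na_k_atpase = total - plus_ouabain
    # Na+-ATPase: the ouabain-insensitive but furosemide-sensitive part.
    na_atpase = plus_ouabain - plus_ouabain_furosemide
    return na_k_atpase, na_atpase


# Example with placeholder values only:
nak, na = pump_activities(total=100.0, plus_ouabain=60.0, plus_ouabain_furosemide=25.0)
print(f"(Na+ + K+)ATPase = {nak:.1f}, Na+-ATPase = {na:.1f}")
```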
Controlling the Spatial Direction of Hydrothermally Grown Rutile TiO2 Nanocrystals by the Orientation of Seed Crystals
Hydrothermally grown TiO2 nanorods are a key material for several electronic applications. Due to its anisotropic crystal structure, the electronic properties of this semiconductor depend on the crystallographic direction. Consequently, it is important to control the crystal orientation to optimize charge carrier pathways. So far, growth on common polycrystalline films such as fluorine-doped tin oxide (FTO) results in randomly distributed growth directions. In this paper, we demonstrate the ability to control the growth direction of rutile TiO2 nanocrystals via the orientation of the seed crystals. Controlling the orientation of such nanocrystals is an important tool for adjusting the electronic, mechanical, and chemical properties of nanocrystalline films. We show that each employed macroscopic seed crystal supports the growth of parallel nanofingers along the [001] direction under specific angles. The parallel growth of these nanofingers leads to mesocrystalline films whose thickness and surface structure depend on the crystal orientation of the seed crystal. In particular, the structure of the films is closely linked with the known inner structure of hydrothermally grown rutile TiO2 nanorods on FTO. Additionally, comprehensive 1D structures on macroscopic single crystals are generated by branching processes. These branched nanocrystals form expanded 2D defect planes, which provide the opportunity for defect-doping-induced two-dimensional electronic systems (2DES).
Previously, we found that the fine structure results from crystal defects in the early growth state and propagates throughout the growth [27]. The additional grain boundaries affect not only the chemical stability but also electronic properties such as the charge carrier mobility [28][29][30][31]. Within a nanorod, these nanofingers merge and form a single crystal for sufficiently high annealing temperatures [32].
Besides the fine structure, another important effect occurs, which offers great opportunities for electronic applications. The hydrothermal growth of rutile TiO2 nanorods is accompanied by branching events in the early growth stage. In as-grown nanorods, the principal crystal and the branch are separated by a thin 2D defect plane. The existence of flat defect planes in crystals enables specific electronic applications. Such a defect layer represents a local change of the stoichiometry and a shift of the conduction band. Density functional theory (DFT) calculations performed by Morgan and Watson indicate a low formation energy of oxygen vacancies at specific interfaces [33]. Depending on the kind of defect levels induced by such vacancies, the defect plane has either an n-n−-n or n-p-n type character [33,34].
In this study, we demonstrate that the growth direction and orientation are controllable via the crystallographic orientation of the seed crystal. Specifically, we performed the nanorod growth on macroscopic rutile TiO2 single crystals with defined crystal facets, resulting in a fine structure similar to that observed in nanorods grown on polycrystalline seed films. Here, the orientation of the nanofingers along the [001] direction is used as an indicator of the crystallographic direction of the grown mesocrystalline films [21,27]. Besides the inner structure, nanorods and the presented mesocrystalline films share another feature. Branching, as typically observed for rutile TiO2 nanorods [35], appears on the mesocrystalline films as well and provides expanded two-dimensional electron systems (2DES) [36], which have potential for sensing and transistor applications [37,38].
Materials and Methods
The hydrothermal growth was performed on commercial rutile TiO2 single crystals (Latech Scientific Supply Pte. Ltd., Singapore) with polished {100}, {001}, {110}, and {111} facets fabricated by a float-zone crystal growth method. For all crystals, the rms roughness is less than 1 nm and the purity is above 99.99%. The hydrothermal growth was performed by heating 20 mL of hydrochloric acid (HCl, reagent grade, VWR Chemicals, 14.8 wt% concentration in distilled water) and 350 µL of titanium(IV) butoxide (C16H36O4Ti, reagent grade, 97%, Sigma-Aldrich, now Merck KGaA, Darmstadt, Germany) in a Teflon-lined autoclave at 180 °C. The growth process was stopped after 3 h by rapid quenching in water. After the growth, the samples were split mechanically and the edge was investigated with a scanning electron microscope (SEM). The field-emission scanning electron microscope (FE-SEM) imaging was performed with a Zeiss CrossBeam 1540XB (Carl Zeiss Microscopy GmbH, Jena, Germany) using an acceleration voltage of 5 kV. Powder X-ray diffraction (PXRD) data were acquired on a Bruker AXS D8 Discover (Bruker Corporation, Billerica, USA) with an IµS microfocus X-ray source (Cu-Kα radiation) equipped with a Vantec500 2D detector.
Results and Discussion
The surface of these hydrothermally grown structures, as drawn in Figure 1, consists of a tip with {001} and {111} facets, side walls composed of {110} facets, and edges directed along the [100] direction. This is attributed to the preferred [001] growth direction. Consequently, it appears possible to control the spatial growth direction by controlling the crystallographic orientation of the seed crystal. Hence, the hydrothermal growth was performed on macroscopic rutile TiO2 single crystals with {001}, {111}, {110}, and {100} facets. The top view and cross-section images are shown in Figure 1. On each facet, a densely packed nanocrystalline film consisting of 10 to 15 nm thick parallel nanofingers was formed. Similar nanofingers have been observed in hydrothermally grown rutile TiO2 nanorods on FTO substrates [27]. The X-ray diffraction pattern (Figure S1, supporting information) indicates that the grown films consist mainly of rutile TiO2, as expected. A comparison of the fine structure in the presented nanocrystalline films and typical nanorods is shown in the supplementary information (Figure S2). The nanofingers are aligned with the [001] direction of rutile TiO2, as determined from the angles between the nanofingers and the crystallographic directions of the substrate (the geometry is sketched below). For the substrates with {100} and {110} facets, the main growth direction is parallel to the substrate surface. Consequently, the hydrothermally grown layers remain thin, as the growth rate perpendicular to the substrate surface is low. In contrast, the substrates with {001} and {111} facets provide a component of the main growth direction perpendicular to the substrate surface and, accordingly, the hydrothermally grown films become thick. This reflects the direction-dependent growth rates that are responsible for the rod-like shape of nanocrystals on common polycrystalline TiO2 or FTO seed layers.
The hydrothermal growth on the {001} facet is shown in Figure 1A,B. On these {001} facets, the nanofingers align normal to the substrate surface, which is parallel to both the main growth direction and the [001] direction of rutile TiO2. As the nanofinger growth proceeds, the diameter decreases by approximately 5%. This is attributed to changing process parameters such as temperature and precursor concentration during the growth process [35,39]. The decrease in diameter causes cracks in the film, as shown in the inset of Figure 1A. In contrast, the nanofingers include an angle of 45° on the {111} facets, which corresponds to the angle between the [111] and [001] directions (Figure 1C,D). As demonstrated in the inset of Figure 1C, cracks are in line with the tilted growth direction, since breaking up the mesocrystalline film at grain boundaries costs less energy than breaking the nanofingers directly.
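The assignment of the nanofinger orientation rests on simple angles between lattice directions and facet normals of the tetragonal rutile cell. The sketch below shows that geometry calculation; the lattice parameters are nominal literature values for rutile (not taken from the paper), so the computed tilts are approximate and only meant to illustrate the trend reported above.

```python
import numpy as np

# Nominal rutile lattice parameters in angstroms (assumed values, not from the paper).
A_RUTILE, C_RUTILE = 4.59, 2.96

def direction(h, k, l, a=A_RUTILE, c=C_RUTILE):
    """Cartesian vector of a [hkl] lattice direction in a tetragonal cell."""
    return np.array([h * a, k * a, l * c])

def plane_normal(h, k, l, a=A_RUTILE, c=C_RUTILE):
    """Cartesian normal of an (hkl) plane in a tetragonal cell (reciprocal-lattice vector)."""
    return np.array([h / a, k / a, l / c])

def angle_deg(v1, v2):
    """Angle in degrees between two Cartesian vectors."""
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Tilt of the preferred [001] growth direction relative to each seed facet normal.
for facet in [(0, 0, 1), (1, 1, 1), (1, 1, 0), (1, 0, 0)]:
    tilt = angle_deg(direction(0, 0, 1), plane_normal(*facet))
    print(f"({facet[0]}{facet[1]}{facet[2]}) facet: [001] tilted {tilt:.1f} deg from the surface normal")
```

With these assumed parameters the {001} facet gives zero tilt, the {110} and {100} facets give 90° (in-plane growth), and the {111} facet gives a tilt of roughly 40–45°, consistent in trend with the observations described above.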
The growth on the {110} (Figure 1E,F) and {100} (Figure 1G,H) facets results in thin films. For the {110} facet, nanofingers are observed which are aligned parallel to the substrate surface. Here, the spatial growth direction along [001] is perpendicular to the [110] direction. This outcome is reasonable, since {110} facets form the side walls of nanorods. The small film thickness results from the slow growth perpendicular to the main growth direction (Figure 1F). The growth on {100} facets results in a thin film with a ribbed surface. The {100} facets are found at the vertical edges of the nanorods. The ribbed surface is attributed to the growth along the face of the substrate, which leaves the edges of the nanofingers exposed. The high density of parallel gables leads to the distinctive rough structure. Summarizing the observations on all four crystal facets, the growth direction of rutile TiO2 nanocrystals is strictly correlated with the crystal orientation of the subjacent rutile TiO2 seed.
In addition to the seed-crystal-dependent growth, the nanorods also exhibit branching, which results in further, uncontrolled growth directions, as demonstrated in Figure 2A for {100} facets. Since the preferred growth direction on {100} facets is parallel to the surface, this phenomenon is strongly related to the branching of single rutile nanorods [35]. A similar branching event observed on a single rutile nanorod, which was created in solution without any expanded seed layer, is presented in the inset of Figure 2A. The inclination angle of 65° corresponds to the presence of a {101} twin plane at the interface between the row of nanorods and the surface of the single crystal [35]. We suppose that the branching is correlated to ruptures on {100} facets, as presented in Figure 2B. If these expanded crystal defects form an intersection line with the crystal surface, a straight 1D surface defect is created. A rupture on a crystal surface provides additional exposed crystal facets with different crystallographic orientations. Such facets support branching and hence the creation of the nanorod wall shown in Figure 2A. Yang et al. and Zhou et al. reported similar wall-like structures grown at the side wall of a primary single freestanding TiO2 nanorod [40,41]. If surface defects on the substrates are responsible for the emergence of the observed branches, they can also be generated selectively, and one can profit from the local properties of such extended nanostructures. These structures offer several important features. Twin planes between rutile nanocrystals have a stoichiometric composition that differs from defect-free TiO2. This affects the band structure at these interfaces, resulting in more n- or p-type two-dimensional layers, as required for thin-film transistors. Besides local electronic applications, a nanocrystalline surface supports the attachment of different kinds of particles. Specific body cells are known to form a strong contact with TiO2 nanostructures and hence these structures are suitable for applications in biological research and lab-on-a-chip devices [25]. The versatility of the applications can be significantly extended by applying a topcoat. For example, the V-shaped row of nanorods shown in Figure 2A, covered with a metallic coating, serves as a waveguide for electromagnetic waves, similar to the channel plasmon-polariton (CPP) geometry [42].
Conclusions
We investigated the hydrothermal growth of rutile TiO2 nanocrystals on macroscopic rutile TiO2 {100}, {110}, {111}, and {001} facets. Macroscopic facets support the growth of dense mesocrystalline rutile TiO2 films consisting of thin nanofingers. These nanofingers are aligned in parallel with the [001] direction and indicate the spatial growth direction. If the seed crystal does not exhibit a {001} facet, the spatial growth direction is tilted towards the seed surface with a well-defined angle. Thus, it is shown that the spatial growth direction is well controlled by the orientation of the employed seed crystal. Surface defects on seed crystals might support the appearance of additional spatial growth directions. Hence, an efficient control of the spatial growth direction relies on defect-free seed crystals. Nevertheless, a controlled creation of branches could be used to fabricate 2D layers with technically important electronic properties.
Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: XRD pattern of the hydrothermally grown films on the rutile single crystals, Figure S2: SEM image comparison of the fine structure.
Figure 1. Top view (A,C,E,G) and cross-section (B,D,F,H) SEM images of hydrothermally grown TiO2 structures on rutile single crystals with {001}, {111}, {110}, and {100} facets. The schematic drawing establishes the relationship between these structures and the observed facets on a common rutile TiO2 nanorod: growth on facets perpendicular to the {001} facet is expected to be much less pronounced, which is in good agreement with the cross-section images. Growth on the {110} facet, which corresponds to the flat side walls of a nanorod, results in a flat film. In contrast, growth on the {100} facet, which corresponds to the edges of a nanorod, results in dense parallel gables. The principal growth direction ([001]) is marked with a blue arrow.
Figure 2. (A) Scanning electron microscope (SEM) cross-section image of an extended branching site on a {100} single rutile crystal. The inset shows an individual branched nanorod, which was created in the solution in the absence of any macroscopic seed substrate. (B) SEM cross-section image of an extended branching row, where the branches were removed by sonication. The blue line marks the interface between the single-crystalline rutile TiO2 {001} substrate and the grown TiO2 layer. The red circle marks a rupture inside the substrate that is supposed to be the origin of the branching. The principal growth direction ([001]) is marked with a blue arrow. Inset: schematic drawing of the hydrothermal growth on {100} single crystals including a double branching event resulting in expanded nanorod walls.
A framework for auralization of boundary element method simulations including source and receiver directivity
I. INTRODUCTION
Since its inception 25 years ago, 1 auralization has become an important tool for acoustic engineers to communicate the sonic benefits of designs to stakeholders; this is particularly commonplace in architectural and automotive applications. Historically, auralizations were created by playing and recording audio inside physical scale models, 2 but as technology has advanced, the simulation is now mostly performed using computer models. In order for such an auralization to be accurate, the numerical predictions on which it is based, and the spatial audio encoding and rendering processes used to present it to the listener, must all be accurate too. To date, the majority of simulations for auralization have been conducted using Geometrical Acoustics (GA), 3 for which the spatial audio encoding aspects are straightforward; for example, a ray may be mapped to the closest direction in a Head-Related-Transfer-Function (HRTF) set, 4 or panned between nearby loudspeakers in an array. It is, however, known that these prediction algorithms are inaccurate in certain circumstances, especially at lower frequencies or in smaller rooms, and/or in cases where diffraction or interference effects are significant. 5 Algorithms that model wave effects fully, 6 such as the Finite Element Method (FEM), the Boundary Element Method (BEM), and the Finite-Difference Time-Domain (FDTD) method, are more accurate and reliable, but processes for encoding their output for auralization are more involved and less well established. A common approach has been to include a head and torso, 7 or an idealized equivalent, 4,8,9 in the model geometry so that binaural output data can be directly generated by placing receivers at the ear locations. This approach is valid but inflexible, since it fixes the listener position and does not allow for inclusion of personalized HRTFs. A more flexible approach is to encode the sound-field around the receiver as a weighted sum of spherical harmonic or plane waves. This approach is widely accepted by the sound-field rendering community as an appropriate encoding format for both loudspeaker array-based and binaural reproduction systems, 10,11 and has the added benefit, from a prediction-algorithm verification perspective, of separating validation of the prediction and rendering processes. It is also consistent with an equivalent representation at high frequencies 12 and, noting that a similar approach may be used to encode source directivity, leads to point-to-point room transfer functions being thought of as having multiple input and output channels. 13 Encoding to such a format from BEM, FEM, or FDTD has to date been achieved by simulating some type of microphone array. [14][15][16] Encoding of this data is, however, not straightforward and is constrained by many of the factors that affect real microphone array design, with tradeoffs having to be made between array size and density and encoding accuracy.
The only exception to this encoding approach is the 2014 method of Mehra et al. 17 that computes the spherical harmonic coefficients from high-order spatial derivatives of the pressure field. When a boundary integral is used to compute the pressure field in the domain, as is in principle applicable to FEM and FDTD but is best suited to BEM, these spatial derivatives may be achieved by taking spatial derivatives of the kernel of the integral (i.e., the Green's function). This means that the coefficients may be found directly by a mapping from the boundary data. Since this mapping is independent of the actual boundary data-it only depends on the receiver location-it may be pre-computed to allow interactive update of the scene data e.g., due to changes in the source. The encoding method applied in this paper achieves the same functionality as the method of Mehra et al. 17 but differs in its mathematical formulation and derivation. In particular, the mathematical formulation 18 is derived from orthogonality statements for "spherical harmonic basis functions" (defined in Sec. II) and gives closed-form statements for the integrals to be evaluated to compute each coefficient, whereas that of Mehra et al. involves evaluating a larger number of integrals and then performing a triple summation to obtain the matrix that maps from boundary data to receiver coefficients. The approach herein may be considered a generalization of the array designs of Hulsebos et al. 19 that is applicable for arbitrary array geometries. These are "open" array designs and are unusual in that they require both pressure and the surface-normal component of pressure gradient at each sensor. Since the array geometry may be chosen freely, it may be taken to be the boundary of the room, at which the pressure and its surface-normal gradient are already known. Compared to Ref. 17, this paper includes substantially more objective validation results and encodes the HRTF datasets in a manner that considers the measurement radius, whereas Mehra et al. 17 appear to transform the simulation results into a set of plane-wave amplitudes and use the HRTF data directly as if it were measured in the far-field. The accuracy of the encoding by either of these approaches will be dictated by the resolution of the boundary mesh, in the same way that for BEM it dictates the accuracy of the sound-field calculated in the domain in general. This has the benefit that there are no parameters or design tradeoffs to be decided by the user and, unlike the mic-array-based encoding methods above, no regularized matrix inversion is required.
A. Hybrid simulation algorithms
An oft-stated limitation of FEM, BEM, and FDTD in room acoustic applications is the rate at which their computational cost increases with frequency. BEM only requires meshing of the two-dimensional boundary, hence to maintain accuracy as frequency f increases the number of Degrees Of Freedom (#DOF) must grow with O(f²), compared to O(f³) for FEM and FDTD that discretize the domain. However, BEM produces full interaction matrices, linking every element to every other element, leading to computational cost and storage requirements that scale O(f⁴).
This has traditionally made it less efficient than FEM or FDTD in most scenarios, 20 primarily finding application in computing scattering from small objects under anechoic conditions, 21 but modern matrix compression techniques such as fast-multipole 22 and adaptive-cross-approximation 23 can significantly compress the matrices, making BEM competitive in many more scenarios. Even with such developments, however, the scaling of computation cost and storage with frequency for these algorithms is still sufficiently unfavorable so as to preclude full audible-bandwidth simulation for most realistic-sized room acoustics problems of interest. Auralization of a space, however, requires measured or simulated data covering the full audible frequency spectrum. Since geometrical acoustics algorithms are inaccurate at low frequencies and FEM, BEM, and FDTD are prohibitively computationally expensive at high frequencies, the only way to currently meet the requirement is to combine the output data or two or more algorithms, each run on a section of the frequency spectrum to which they are more suited. This approach was first pioneered by Granier et al. 4 in the mid-1990s using FEM and geometrical acoustics, and the same combination has been studied between 2009 and 2014 in Refs. 7 and 24-27, and by G omez et al. 15 and Tafur et al. 8 in 2017. BEM was used as the low frequency method by Summers et al. 9 in 2004 and FDTD was used for lower frequency bands in a multiband framework proposed by Southern et al. 28 in 2013. Mehra et al. 17 also used BEM to compute results for auralization in 2014, but extrapolated results to higher frequencies rather than combining them with those of a geometrical acoustics algorithm. While pragmatic, this hybrid approach opens up another question: how the results from the two algorithms should be combined. It was obvious from the earliest attempts that some form of crossover filter was required between the two models, 4 and that the design of this, e.g., filter lag, 9 would have an effect on the combined Room Impulse Response (RIR) generated. Another issue is how the filtered transfer functions from the two algorithms will interfere with one another once they are combined; Aretz et al. 24 considered this in the most depth and proposed two crossover methods aimed at addressing different concerns. This paper circumvents those issues and questions by only presenting and assessing the accuracy of the BEMsimulated part of the solution. For frequency-domain results this is straightforward-only the relevant frequency range will be presented-but for time-domain results, a low-pass filter will be employed to minimize Gibbs artefacts; these will be validated against equivalently low-pass filtered measured data. This means that the results herein could be readily combined with high-pass filtered geometrical acoustics results to form a hybrid scheme, so are representative of what the method's performance would be in such a case without opening up questions pertaining to the accuracy of the high-frequency algorithm or combining approach. B. Input data Uncertainties, inaccuracy, unsuitability, or presence of gaps in input data is widely acknowledged to be a factor that significantly constrains the accuracy of room acoustic simulations. 5 When dealing with FEM and BEM, for which error bounds can be quantified and which usually produce accurate results for a defined problem if applied correctly, it is reasonable to state that error in input data is the main source of error in output data. 
For room acoustics simulations as considered herein, the input data comprises: (i) the geometry of the space and source and receiver locations, (ii) suitable data characterizing the materials present in the space, and (iii) data characterizing the source directivity. For the simulations presented herein, this data was drawn from the Ground truth for Room Acoustical Simulation (GRAS) database 29,30 that was created for the 1st International Round Robin on Auralisation. This database provides high resolution input and output data with the aim of allowing the performance of simulation algorithms to be assessed and improved. Crucially, for the purpose of validating the main contribution of the paper, being an application of the sound-field encoding technique from Ref. 18, it includes measured Binaural RIRs (BRIRs) plus a detailed HRTF set for the Head-And-Torso-Simulator (HATS) used to acquire them. Real acoustic sources have complex frequencydependent directivities, but the vast majority of FEM, BEM, or FDTD simulations use simple monopole directivity and it is uncommon to see anything more complicated than a dipole. Part of the reason for this is that it is not trivial to implement higher-order sources in an algorithm that discretizes the domain, though higher-order multipoles have been attempted. 31 A directional source model very similar to that used herein was implemented in FDTD by initializing a wave in the grid, 32,33 but its details appear significantly more complex than that proposed here and the proximity of the source to boundaries is presumably limited. Source strength calibration is in general also non-trivial with FDTD; much of the detail in the multiband framework of Southern et al. 28 is concerned with achieving this. An alternative approach, as implemented herein and in Ref. 17, is to state the incident pressure field analytically and just compute the scattering using the numerical model. This is standard practice in BEM, 34 but is also possible for FEM and FDTD. It circumvents issues to do with the complexity and singular nature of the incident pressure field near the source because the equations for this are only ever evaluated numerically at boundaries, which are assumed to be some distance away. There remains potential for difficulties if a high order source approached a boundary-in this case, the mesh would need to be locally refined to deal with the more spatially concentrated pressure fluctuations-but in most cases, this should not be necessary. C. Overview of this paper The primary contribution of this paper is demonstration of the spatial audio encoding process proposed in Ref. 18. The secondary contributions are to demonstrate the effectiveness and accuracy of the full processing chain, including source directivity, BEM simulation, and binaural encoding. Section II presents the mathematical theory behind the source and receiver models and how they are interfaced to BEM. Section III gives more specific implementation details on how they were applied to the dataset used. Section IV presents results validating the simulations against measurements from the GRAS database, then Sec. V draws conclusions and discusses avenues for future research. II. THEORY This paper will assume that the medium of wave propagation, the air in the room, is linear, homogeneous, and isotropic, with frequency and position invariant wave speed c 0 and density q 0 . 
Real-valued acoustic pressure perturbations u(x, t), where x is a point in three-dimensional (3D) Cartesian space and t is time, obey the linear acoustic wave equation ∇²u = c_0⁻² ∂²u/∂t². U(x, ω) is the complex-valued Fourier transform of u, where ω = 2πf is angular frequency in radians per second and f is frequency in Hz; it satisfies the Helmholtz equation ∇²U + k²U = 0, where k = ω/c_0 is the wavenumber in radians per meter. In this paper, e^(−iωt) time dependence is assumed for the inverse Fourier transform, that is, a frequency component U(x, ω) would produce a time-dependent pressure field Real{U(x, ω) e^(−iωt)}. The majority of this paper will be written in terms of the latter quantity U, since the source and receiver descriptions are more easily stated as functions of k and the BEM algorithm used was a frequency-domain code that solves the Helmholtz equation. The measured source data and the desired output data for auralization are, however, all time-domain, so the processing necessarily begins and ends with forward and inverse Fourier transforms, respectively, implemented in practice using the Fast Fourier Transform (FFT) algorithm.
A. Source and receiver models
This paper will consider sources and receivers that are spatially compact, so they may reasonably be considered as centered on a point in space; these will be denoted x_s and x_r, respectively. The mathematical descriptions of the pressure fields in the vicinity of each point will be based on a spherical coordinate system (r, α, β) centered on that location, with radius r and azimuthal and zenith angles α and β, respectively. In these coordinate systems, the pressure of waves that satisfy the Helmholtz equation at frequency ω [with the exception of Eq. (1) at x = x_s] may be represented in the neighborhood of x_s and x_r by 10,35,36

U_inc(x) = Σ_{n=0}^{O_s} Σ_{m=−n}^{n} B_{m,n} H^out_{m,n}(x − x_s),   (1)
U_total(x) = Σ_{n=0}^{O_r} Σ_{m=−n}^{n} A_{m,n} J_{m,n}(x − x_r).   (2)

Here, U_inc is defined to be the incident pressure arriving from some source under anechoic conditions and U_total is the total pressure including reflections too. Equation (1) is valid when a source is present at x_s, and Eq. (2) is valid and will converge 22 for an expansion point x_r that is not too close to a source or boundary. A_{m,n} and B_{m,n} are sets of complex frequency-dependent coefficients whose values depend on the pressure field being represented. It is intended that the B_{m,n} coefficients are "input data" arising from the encoding of the directional frequency response of some source (see Sec. III B) and that the A_{m,n} coefficients are the output data of this simulation process, being the computed total pressure field encoded as directional coefficients relative to the receiver position, ready for presentation by an auralization system. They may therefore respectively be used to represent the directional nature of a source and the directional nature of sound arriving at a receiver. Note that the inevitable scattering by the receiver is not included in Eq. (2); this mechanism is included either implicitly within HRTFs or physically if the sound is rendered to a listener over loudspeakers. The upper limits in n, termed O_r and O_s, are often termed the "order" of the expansion, and the numbers of A_{m,n} and B_{m,n} coefficients are given by N_r = (O_r + 1)² and N_s = (O_s + 1)², respectively.
The functions H^out_{m,n}(r) and J_{m,n}(r), plus another H^in_{m,n}(r) that will be required later, are defined as

H^out_{m,n}(r) = Y_n^m(β, α) h^out_n(kr),   (3)
H^in_{m,n}(r) = Y_n^m(β, α) h^in_n(kr),   (4)
J_{m,n}(r) = Y_n^m(β, α) j_n(kr).   (5)

Here, h^out_n and h^in_n are spherical Hankel functions of order n that are "outgoing" and "incoming," respectively; with e^(−iωt) time dependence, as assumed herein, they are of the first and second kind, respectively. j_n(kr) = ½ h^out_n(kr) + ½ h^in_n(kr) is a spherical Bessel function. Y_n^m(β, α) is a spherical harmonic function of order m, n. A number of marginally different normalization schemes exist, but in this paper they are defined as in Ref. 22, in terms of an associated Legendre polynomial P_n^m(···) and a complex exponential in the azimuthal angle. Spherical harmonic functions are also used to interpolate source directivities and HRTFs in geometrical methods at high frequencies. 12,37 There, the radial propagation is assumed to match that of a monopole regardless of n, so directivity becomes independent of r; this is equivalent to replacing all h^out_n in Eq. (3) by h^out_0, which is appropriate since h^out_n(kr) ≈ i^(−n) h^out_0(kr) for large kr. 38 Representations like Eqs. (1) and (2) are in contrast used when distance is considered important, e.g., for near-field compensated Ambisonics 10 or HRTF range extrapolation. 39 Directivity is usually measured over a sphere, and data measured in this way may be encoded as A_{m,n} and B_{m,n} coefficients so long as the measurement radius is known. Alternatively, techniques that use double-layer array measurements may obtain B_{m,n} coefficients directly. 36,40,41
B. Boundary integral equations
The Kirchhoff-Helmholtz Boundary Integral Equation (KHBIE) is found by applying Green's second theorem to a pair of acoustic waves over some domain Ω+ that contains the acoustic medium. 34 One of the waves is U, the pressure field under study, which satisfies the Helmholtz equation everywhere in Ω+. The other is the free-space acoustic Green's function G(x, y) = e^(ik|x−y|)/(4π|x−y|), which satisfies the Helmholtz equation for all y ∈ Ω+ with y ≠ x. The result is a surface integral over the boundary Γ that contains Ω+. For a scattering problem with U_total = U_inc + U_scat, where U_inc is the aforementioned pressure field radiated by the source in anechoic conditions and U_scat is the difference that occurs due to reflections from the boundary, this may be expressed as

U_scat(x) = ∫_Γ [ ∂G(x, y)/∂n_y U_total(y) − G(x, y) ∂U_total(y)/∂n_y ] dΓ_y.   (7)

Here, the notation ∂/∂n_y is shorthand for n̂_y · ∇_y, where n̂_y is a unit vector pointing normal to Γ and into Ω+ at point y ∈ Γ, and the subscript y means "with respect to or evaluated at point y." Equation (7) is the basis of our BEM formulation. The total pressure U_total should equal zero in a domain Ω− that is on the opposite side of Γ to Ω+, hence U_scat(x) = −U_inc(x) for x ∈ Ω−. Taking the limit as x approaches Γ from within Ω− produces an inhomogeneous Fredholm integral equation of the second kind. This may be solved by discretizing the boundary quantities U_total and ∂U_total/∂n_y on a boundary mesh and then solving the resulting matrix equation numerically. More details on this process are given in Sec. III C.
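The spherical basis functions defined in Eqs. (3)–(5) can be evaluated with standard SciPy special functions. The sketch below is illustrative only: SciPy's spherical harmonic normalization (which includes the Condon–Shortley phase) may differ slightly from the convention of Ref. 22 used in the paper, and the summation routine is a hypothetical helper rather than part of the framework itself.

```python
import numpy as np
from scipy.special import sph_harm, spherical_jn, spherical_yn

def Y(m, n, beta, alpha):
    """Spherical harmonic Y_n^m: beta = zenith angle, alpha = azimuth.
    Note SciPy's argument order is (order m, degree n, azimuth, zenith)."""
    return sph_harm(m, n, alpha, beta)

def h_out(n, x):
    """Outgoing spherical Hankel function (first kind for exp(-i*w*t) time dependence)."""
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

def h_in(n, x):
    """Incoming spherical Hankel function (second kind here)."""
    return spherical_jn(n, x) - 1j * spherical_yn(n, x)

def H_out_basis(m, n, r, beta, alpha, k):
    """Radiating basis function H^out_{m,n}(r) = Y_n^m(beta, alpha) h^out_n(kr), Eq. (3)."""
    return Y(m, n, beta, alpha) * h_out(n, k * r)

def J_basis(m, n, r, beta, alpha, k):
    """Regular basis function J_{m,n}(r) = Y_n^m(beta, alpha) j_n(kr), Eq. (5)."""
    return Y(m, n, beta, alpha) * spherical_jn(n, k * r)

def incident_pressure(B, r, beta, alpha, k):
    """Evaluate Eq. (1): U_inc = sum over (m, n) of B_{m,n} H^out_{m,n}.
    B is a flat list of (Os + 1)^2 coefficients in (n, m) order."""
    Os = int(np.sqrt(len(B))) - 1
    U, i = 0.0 + 0.0j, 0
    for n in range(Os + 1):
        for m in range(-n, n + 1):
            U += B[i] * H_out_basis(m, n, r, beta, alpha, k)
            i += 1
    return U
```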
Consider now the simulation process architecture shown in Fig. 1. The reader is encouraged to notice the similarities between this framework and the high-frequency geometrical acoustics framework given in Fig. The processing blocks are: (1) Source directivity encoding, (2) Source rotation, (3) Source to boundary U_inc mapping, (4) Boundary to boundary U_inc to U_total mapping (the BEM solution), (5) Boundary to receiver U_scat mapping, (6) Source to receiver U_inc mapping, (7) Receiver rotation, (8) HRTFs. Blocks 1 and 8 are, respectively, the encoding of measured source directivity as B_{m,n} coefficients following Eq. (1) and of HRTFs as A_{m,n} coefficients following Eq. (2). Blocks 2 and 7 are rotations of these, and allow for changes in source and receiver orientation; this may be readily achieved in the spherical harmonic domain by a matrix multiplication. 22 Block 3 is simply the evaluation of Eq. (1) for points on the boundary, and block 4 is the BEM solution. This leaves processes for blocks 5 and 6 to be identified. Note that these each separately encode U_inc and U_scat in the form of Eq. (2), as separate sets of coefficients A^inc_{m,n} and A^scat_{m,n} that are then summed. A solution to implementing the process in block 6 and finding A^inc_{m,n} is in fact well known; it may be achieved in a straightforward way using a translation operator in the spherical harmonic domain. 22 How to achieve the process in block 5, mapping boundary pressure in a BEM model to scattered pressure encoded as coefficients A^scat_{m,n}, is not as well established, with simulation of virtual microphone arrays previously being the norm, as discussed in Sec. I, and the direct approach of Mehra et al. 17 being the state of the art. In this paper, the alternative direct approach by Hargreaves and Lam 18 is implemented. This allows A^scat_{m,n} to be found by evaluation of the following boundary integral equation, 42 where a bar over a quantity indicates conjugation:

A^scat_{m,n} = ik ∫_Γ [ ∂H̄^in_{m,n}(y − x_r)/∂n_y U_total(y) − H̄^in_{m,n}(y − x_r) ∂U_total(y)/∂n_y ] dΓ_y.   (8)

Equation (8) possesses clear similarities to the KHBIE in Eq. (7). Noting in particular that H̄^in_{0,0}(y − x_r) = G(x_r, y) × √(4π)/(ik) and that J_{0,0}(0) = 1/√(4π), it is apparent that Eq. (7) is in fact a special case of Eq. (8) for m = n = 0. Equation (8) may be implemented numerically in a similar manner to how Eq. (7) is for omnidirectional external receivers. H^in_{m,n} contains a higher-order singularity than G for n > 1, and will also contain angular oscillations that G does not, but both of these characteristics should be resolved well by standard quadrature techniques on a mesh that is fine with respect to wavelength, so long as receivers are not too close to a boundary and O_r is not unnecessarily high at low frequencies. More detail is given in Sec. III C. Figure 1 also displays the #DOF present at the interfaces between the various processes; for most, the computational cost of each process will scale with the product of these, though for a direct implementation of process 6 it scales an order of magnitude worse. 22 In practice, however, processes 3, 4, and 5, involving N_e, the #DOF in the BEM mesh, will dominate, simply because N_e is usually a few orders of magnitude greater than N_s or N_r; for the case studies considered herein N_s = 25 and N_r = 121, whereas N_e was typically of the order of tens of thousands. Process 4, the BEM solution, is therefore expected to be the most computationally intensive, since its computation cost is proportional to N_e². It will be seen from the test cases that this was, however, not the case; the libraries used to evaluate this stage are the most optimized and, combined with the Adaptive-Cross-Approximation (ACA) solver, 23 this stage is neither the slowest nor the one with the worst computational cost scaling.
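To make the boundary-to-receiver mapping of Eq. (8) concrete, the sketch below approximates it by a simple midpoint rule over element centroids, with the kernel's normal derivative taken by finite difference. This is a deliberately simplified collocation-style illustration under stated assumptions (centroid data, inward normals, SciPy spherical harmonics), not the Galerkin implementation used in the paper; all names are hypothetical.

```python
import numpy as np
from scipy.special import sph_harm, spherical_jn, spherical_yn

def h_in(n, x):
    """Incoming spherical Hankel function (second kind for exp(-i*w*t) convention)."""
    return spherical_jn(n, x) - 1j * spherical_yn(n, x)

def H_in(m, n, d, k):
    """H^in_{m,n} evaluated for displacement vectors d = y - x_r, shape (N, 3)."""
    r = np.linalg.norm(d, axis=-1)
    beta = np.arccos(np.clip(d[..., 2] / r, -1.0, 1.0))   # zenith angle
    alpha = np.arctan2(d[..., 1], d[..., 0])               # azimuth angle
    return sph_harm(m, n, alpha, beta) * h_in(n, k * r)

def receiver_coefficients(y, n_hat, areas, U, dUdn, x_r, k, O_r, eps=1e-4):
    """Midpoint-rule approximation of Eq. (8): map boundary data (U, dU/dn) sampled
    at element centroids y (with areas 'areas' and inward unit normals 'n_hat')
    to the receiver coefficients A^scat_{m,n} about x_r."""
    A = {}
    d = y - x_r
    for n in range(O_r + 1):
        for m in range(-n, n + 1):
            Hbar = np.conj(H_in(m, n, d, k))
            # Normal derivative of the conjugated kernel via a one-sided finite difference.
            Hbar_step = np.conj(H_in(m, n, d + eps * n_hat, k))
            dHbar_dn = (Hbar_step - Hbar) / eps
            A[(m, n)] = 1j * k * np.sum(areas * (U * dHbar_dn - dUdn * Hbar))
    return A
```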
Values of N s , N e , and N r are necessary to maintain accuracy as frequency increases all scale Oðf 2 Þ, so the total computational cost of the algorithm is expected to scale Oðf 4 Þ. III. IMPLEMENTATION The above framework was implemented for a subset of the scenes from the GRAS database. 29,30 The database contains 11 scenes, some with multiple variants, with seven being fully or hemi anechoic laboratory setups and the remaining four being room acoustic scenarios. In this paper, results for the following three scenes are presented: Scene 1 actually included two other variants; one with a mineral wool slab placed on the floor and another with a Medium Density Fiberboard (MDF) diffuser. These cases have also been attempted with reasonable success but are not reported here due to space limitations. It should be noted, however, that they are numerically challenging in BEM primarily due to the extreme aspect ratio of the samples, 43 requiring higher element counts and more accurate numerical integration than would normally be expected. The hard floor variant is included since its implementation is essentially image source with directivity, i.e., no BEM mesh, so it allows the accuracy of the source representation and encoding process to be independently quantified. Scene 3 comprised two 2 m square 25 mm thick MDF panels separated by a distance of 10 m, with a source and receiver location both located on the center line between these each spaced 3 m from one of the panels. This is an interesting configuration to simulate, since it will give a flutter echo the damping of which is dictated as much by diffraction as by material absorption. 44 Scene 3 is also a good test case for binaural reproduction, since the HATS was orientated so that reflections occur from ear to ear across the head. Scene 9 was a small seminar room with "relatively simple and easy to describe geometry, but challenging low frequency behavior," so it presents another interesting case for which to apply BEM simulation. It also typifies the input data challenges that a user of BEM encounters in reality, so it is included to demonstrate how those affect simulation accuracy. The room was nominally 8.5 m long by 6 m wide by 3 m high, though it has various inclusions; for details, see Ref. 30. Material data was provided in the form of third-octave absorption coefficients that were established through a mixture of in situ measurement and estimation based on published database values, 29 the latter being required because the former is known to be inaccurate at low frequencies 45 and for materials with low absorption. The result was that the provided data both did not extend down to the lowest frequencies that were simulated and tended to unrealistically small absorption values in this range; if not addressed, this would have led to very low modal damping and unrealistically high sound pressure levels (SPLs). Anecdotally, we were informed that the seminar room had some lightweight panel walls, and it is likely that full-panel membrane motion, which is a significant source of absorption and modal damping at low frequencies, was present but not characterized by this data. Moreover, a door was present in the room and will likely give high losses at low frequencies due to transmission, but no material data was provided for it. A process of extrapolation, modification and fabrication was therefore required to achieve an appropriate and realistic set of boundary data; this is briefly outlined in Sec. III E. 
The RIRs were measured with a bespoke multiway dodecahedron loudspeaker, 30 but the directional narrowband response data that was available for the other loudspeakers was not available for this one, adding another source of uncertainty. Temperature and humidity data was provided for each scene in the database, but it was not straightforward to use this since there was often a slight mismatch between the conditions when the source and HATS directivities were measured and those when the full scene was measured. Consequently, c_0 = 343 m/s and ρ_0 = 1.21 kg/m³ were assumed for all processes instead.
A. Fourier transforms and filtering
The simulation process depicted in Fig. 1 will be performed entirely in the frequency domain, as was also the case for all the BEM and FEM algorithms discussed in Sec. I A, meaning that an inverse Fourier transform is required in order to obtain an auralizable IR. It should also be noted that both BEM and FEM also exist as time-domain solvers, [46][47][48] but these are less mature and have not been applied in any hybrid framework to date. When using an Inverse Fast Fourier Transform (IFFT) algorithm to achieve this, results for many frequencies are required at a spacing Δf = 1/T, where T is the required IR length; this must be long enough that the IR decays to a negligible level to minimize wrap-around error. For scene 1 it was taken that Δf = 2 Hz, so T = 0.5 seconds, and for scenes 3 and 9, where higher-order reflections are expected, it was taken that Δf = 0.5 Hz, so T = 2 seconds. This was, however, found to be insufficient for scene 9, so the room transfer function was spline interpolated in the frequency domain, as suggested by Aretz et al., 24 to give Δf = 0.25 Hz and T = 4 seconds. The low-pass filter was chosen to be an 8th-order Butterworth filter, following method 1 of Aretz et al., 24 and this was applied in the frequency domain pre-IFFT. Running a BEM at so many closely spaced frequencies is not an efficient application of the algorithm since the interaction matrices must be reconstructed from scratch for each frequency; it is tolerated here for validation purposes. Multifrequency BEM 49,50 provides a solution to this by interpolating the interaction matrices between neighboring frequencies. Values from preceding frequencies were, however, used to "seed" the iterative matrix solver, meaning fewer solver iterations were required. It is also necessary to run the algorithm beyond the intended crossover frequency so that data is available for the frequency region where the filter "rolls off." Aretz et al. 24 recommended this should be at least half an octave above the crossover frequency, but even this appears rather optimistic when considered in terms of typical filter roll-off in dB/octave, and artefacts were reported as being visible in their RIRs. In these simulations, it was chosen that the simulations should extend one full octave above the intended crossover frequency. To mitigate the computational cost that this incurred, the mesh was not refined further beyond the crossover frequency, meaning that computational cost was fixed but accuracy reduced with increasing frequency; this is acceptable since those frequencies will be heavily attenuated by the crossover filter. The crossover frequency was chosen to be 1 kHz for scenes 1 and 3 and 400 Hz for scene 9, hence the maximum frequencies simulated were 2 kHz and 800 Hz, respectively.
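The step from a simulated transfer function to an auralizable, band-limited impulse response can be sketched in a few lines of NumPy. The snippet below applies a zero-phase Butterworth magnitude weighting before the inverse FFT; this is a simplification of "method 1 of Aretz et al." as described above (the exact filter realization in the paper may include phase), and all names and the placeholder transfer function are illustrative assumptions.

```python
import numpy as np

def auralizable_ir(H, df, f_crossover, order=8):
    """Low-pass a one-sided room transfer function H (sampled from 0 Hz at spacing df)
    with an order-`order` Butterworth magnitude response, then inverse-FFT to an IR."""
    f = np.arange(len(H)) * df                                     # frequency axis in Hz
    lowpass = 1.0 / np.sqrt(1.0 + (f / f_crossover) ** (2 * order))  # |H_LP(f)|, zero phase
    ir = np.fft.irfft(H * lowpass)                                 # real impulse response
    return ir, 1.0 / df                                            # IR and its length T in s

# Example: T = 2 s -> df = 0.5 Hz; crossover at 1 kHz, data simulated up to 2 kHz.
df = 0.5
f = np.arange(0.0, 2000.0 + df, df)
H = np.exp(-1j * 2 * np.pi * f * 0.01) / (1.0 + f / 500.0)         # placeholder transfer function
ir, T = auralizable_ir(H, df, f_crossover=1000.0)
```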
B. Source directivity and HRTF encoding
Three sound sources were used in the scenarios chosen: a Genelec 8020c for scenes 1 and 3, and a QSC K8 and a custom three-way dodecahedron for scene 9, the former being used for BRIRs and the latter for RIRs. The only data available on the dodecahedron loudspeaker was that it aimed to be omnidirectional with a flat frequency response above 40 Hz, so it was assumed to follow this. For the Genelec and QSC loudspeakers extremely high-resolution data was available, being a set of 64 442 impulse responses measured at points on a sphere centered on the source; this was 2 m radius for the 8020c and 8 m for the K8. These were first zero-padded to match the intended output RIR length and then FFT'd to acquire a set of complex frequency-domain transfer functions (units Pa/V); these were taken to be the incident pressure U_inc measured at each point. To encode them to a set of coefficients B_{m,n}, one solution is to create a matrix equation to be inverted where each row is Eq. (1) applied at a different measurement point. However, here the number of measurement points was so great that the orthogonality of Y_n^m over a sphere could be exploited to calculate B_{m,n} directly. For a given frequency, this orthogonality integral is approximated by a finite sum over the measurement angles [Eq. (9)]; here, w_p is equal to the area of the sphere that is closest to the pth point. Finally, the coefficients for each source were scaled by a frequency-independent calibration factor so that an SPL of 80 dB was produced at 2 m at 1 kHz, matching the procedure performed for the measurements. 30 The measurement process used to acquire the HRTF library is detailed in Ref. 51. This also takes the form of a set of IRs acquired at different angles, but here they were measured using a loudspeaker mounted on an arc of 1.7 m radius, and for each source location there are two IRs: left and right. In the absence of directivity information, the source will be assumed to be a monopole; this appears not unreasonable since in Ref. 52 it is validated against BEM simulations performed by reciprocity using a source at the ear and an omnidirectional point receiver at the loudspeaker center. Measured HRTFs were normalized by removing the microphones from the ears and placing them at the origin, then repeating the experiment. Fundamentally, HRTFs are linear mappings L and R from the incoming pressure field U_total(x), as exists in the absence of the HATS, to the pressures at the left and right ears, U_L and U_R. Assuming that U_total is represented by Eq. (2), these become discrete mappings U_L = A L and U_R = A R, where A is a row vector containing the elements of A^total_{m,n} and L and R are column vectors that are the discrete forms of L and R. Including the normalization by the pressure at the HATS center, this amounts to the elements of A being defined for the pth point by A^total_{m,n} = 4π Y_n^m(β_p, α_p) h^out_n(kr)/h^out_0(kr), and stacking rows for all the measured points produces a matrix equation to be solved. Since the number of measurement points was so great, it was possible to solve without regularization using a standard least-squares technique. The accuracy achieved by these encoding processes is shown for the Genelec loudspeaker and the HATS in Fig. 2; a similar figure for the QSC K8 is included in the supplementary material. 63
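The orthogonality-based source encoding described above can be sketched as a weighted sum over the measurement points, per frequency. The weighting and normalization details below (solid-angle weights, SciPy's spherical harmonic convention) are assumptions to be checked against Eqs. (6) and (9); the function is an illustrative helper, not the paper's implementation.

```python
import numpy as np
from scipy.special import sph_harm, spherical_jn, spherical_yn

def h_out(n, x):
    """Outgoing spherical Hankel function (first kind, exp(-i*w*t) convention)."""
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

def encode_source(U_meas, beta, alpha, w, r, k, O_s):
    """Encode one frequency of a measured directional response into B_{m,n} coefficients.

    U_meas -- complex incident pressures at the measurement points
    beta, alpha -- zenith and azimuth angles of each point on the measurement sphere
    w      -- quadrature weight of each point (taken here as its solid angle in sr)
    r      -- measurement radius in metres
    k      -- wavenumber at this frequency
    O_s    -- maximum spherical harmonic order of the expansion
    """
    B = {}
    for n in range(O_s + 1):
        radial = h_out(n, k * r)                    # divide out the radial propagation term
        for m in range(-n, n + 1):
            Ymn = sph_harm(m, n, alpha, beta)
            B[(m, n)] = np.sum(w * U_meas * np.conj(Ymn)) / radial
    return B
```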
In Fig. 2, the encoding error is quantified as normalized L2 error, being the L2 norm of the residual between the measured data and the encoded version evaluated on the measurement surface, divided by the L2 norm of the measured data. The surface integrals involved in the L2 norms were approximated by weighted sums in the same manner as was done in Eq. (9). In both cases the approximation improves with maximum spherical harmonic order O, and a higher value of O is required to maintain the same accuracy as frequency f is increased. The lower right region of the two plots is quite similar, but with a slightly smaller residual for the HRTFs. Accuracy deteriorates on the far left of Fig. 2(a) due to the singular nature of spherical Hankel functions at small kr. In contrast, a clear change in the trend lines is seen in Fig. 2(b) at 200 Hz; it seems likely that this is caused by the transition between measured and simulated data that was necessary when creating the HRTF library. 52 Figure 2 suggests that the optimal values of O_s and O_r should change with frequency. This was initially attempted, but sharp transitions between orders were visible in the BEM results, hence this approach was discarded. For simplicity, constant values of O_s and O_r were instead used for all frequencies; these were chosen to be O_s = 4 and O_r = 10, hence N_s = 25 and N_r = 121. The greater value of O_r was chosen to allow the encoding process in Eq. (8) to be tested.
C. BEM
The BEM simulations were performed using BEM++ 23,53,54 version 3.1. This is an open-source BEM library that is invoked from Python scripts, creating a flexible interface that allows the boundary integral operators provided to be assembled in customizable ways. BEM++ implements a Galerkin BEM algorithm in 3D and includes an ACA solver that accelerates matrix assembly and solution. For the solution of the Helmholtz equation, BEM++ provides four standard boundary integral operators that each map, in a different way, a quantity U defined on a boundary section Γ to a location x; their kernels are, respectively, G(x, y), ∂G(x, y)/∂n_y, ∂G(x, y)/∂n_x, and ∂²G(x, y)/∂n_x∂n_y. Here, S, D, A and H are, respectively, termed the single-layer potential, the double-layer potential, the adjoint double-layer potential, and the hypersingular operator. Note that the definition of H has here been written negated compared to the convention used in the library. Additionally, an identity operator I{U}(x) = U(x) is also defined. The Python objects representing these operators may be added, multiplied by each other or by scalars, or concatenated to form blocked operators, as required by the problem being studied. The discretized versions of these boundary operators, for surface-to-surface mappings, are found using the Galerkin method. Rather than solving a boundary integral equation (BIE) for a finite set of points on the boundary, as is done in the collocation method, this solves the "weak form" of the BIE on average over the entire boundary, and hence involves a second surface integral. 34 A set of weighting functions is chosen to spatially weight this "testing" process and produce each row in the matrix equation; in a Galerkin scheme these are equal to the basis functions used to discretize the boundary quantities.
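As a concrete illustration of the BEM++ workflow described above, the sketch below assembles standard Helmholtz boundary operators, solves a simple exterior rigid-scattering problem with GMRES, and evaluates the scattered field with a potential operator. The geometry, frequency, and boundary condition are placeholders rather than any of the GRAS scenes, and module paths or call signatures may differ between BEM++/Bempp versions, so treat this as a hedged sketch rather than the paper's scripts.

```python
import numpy as np
import bempp.api

k = 2 * np.pi * 500.0 / 343.0                       # wavenumber at 500 Hz (illustrative)
grid = bempp.api.shapes.regular_sphere(3)           # placeholder geometry
space = bempp.api.function_space(grid, "P", 1)      # continuous piecewise-linear space

# Standard Helmholtz boundary operators and the identity.
slp = bempp.api.operators.boundary.helmholtz.single_layer(space, space, space, k)
dlp = bempp.api.operators.boundary.helmholtz.double_layer(space, space, space, k)
ident = bempp.api.operators.boundary.sparse.identity(space, space, space)

# Incident field of a point source, evaluated on the boundary via a Python callable.
x_s = np.array([2.0, 0.0, 0.0])
def u_inc_callable(x, normal, domain_index, result):
    r = np.linalg.norm(x - x_s)
    result[0] = np.exp(1j * k * r) / (4.0 * np.pi * r)
u_inc = bempp.api.GridFunction(space, fun=u_inc_callable)

# Rigid-boundary formulation sketch (cf. the scene 9 equation with Y = 0):
# (D - 1/2 I){U_total} = -U_inc.  Sign conventions depend on the normal direction.
lhs = dlp - 0.5 * ident
U_total, info = bempp.api.linalg.gmres(lhs, -u_inc, tol=1e-5)

# Scattered pressure at exterior field points via the double-layer potential operator.
points = np.array([[0.0], [0.0], [3.0]])            # shape (3, n_points)
dlp_pot = bempp.api.operators.potential.helmholtz.double_layer(space, points, k)
u_scat = dlp_pot.evaluate(U_total)
```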
An operator K ∈ {S, D, A, H, I} is therefore mapped to its discrete matrix form K with entries given by such a weighted double surface integral. Here, b_j is a basis function drawn from the set used to represent the "radiating" quantity, and b_i is a basis function drawn from the set used to represent the "receiving" quantity that is being "tested"; these sets are not necessarily the same and may be chosen differently on different boundary sections or for representing a different quantity. A similar statement maps a user-defined Python function f(x), typically used to compute the incident wave U_inc, onto the "receiving" set of basis functions; this produces a vector f. This interface is extremely flexible, but can be slow for complicated functions, since Python is an interpreted language and the functions are evaluated on a per-abscissa basis. In this scheme, for instance, f(x) is typically a spherical harmonic basis function (or its spatial derivative), in accordance with the source definition. It will be seen in Sec. IV E that this operation, which is normally assumed to be trivially quick compared to assembly and solution of the linear system, actually takes the longest, due to the complexity of these functions and the slow nature of native Python code compared to the compiled core libraries that BEM++ is built on. The KHBIE in Eq. (7) can be written using D and S as U_scat(x) = D{U_total}(x) − S{∂U_total/∂n_y}(x). In order to implement the spherical harmonic encoding in Eq. (8), operators D^in_{m,n} and S^in_{m,n} are required, in which the kernels of S and D are replaced by H^in_{m,n}(y − x_r) and by its normal derivative ∂H^in_{m,n}/∂n_y(y − x_r), respectively. This allows Eq. (8) to be written as A^scat_{m,n} = ik D^in_{m,n}{U_total}(x_r) − ik S^in_{m,n}{∂U_total/∂n_y}(x_r). It follows that S^in_{0,0} = S × √(4π)/ik and D^in_{0,0} = D × √(4π)/ik, hence D^in_{m,n} and S^in_{m,n} may be viewed as a generalization of the standard operators D and S. The discrete forms of D^in_{m,n} and S^in_{m,n} have matrix entries given by equations similar to Eqs. (16) and (17), but with b_j appearing in place of U. These are, unsurprisingly, not built into BEM++, but Eq. (15) provides a method to evaluate them. Python functions that compute H^in_{m,n}(y − x_r) and ∂H^in_{m,n}/∂n_y(y − x_r) may be passed to the routine that implements Eq. (15), and the result is a coefficient from the discrete matrix form of S^in_{m,n} and D^in_{m,n}, respectively. Note that evaluating them in this way is an extremely slow process; the reasons stated above all still apply, but are compounded by the fact that every spherical harmonic order must be evaluated separately, whereas the associated Legendre polynomials therein are much more efficiently evaluated simultaneously for all m of an order n rather than separately. For the verification purposes herein, however, this inefficiency is tolerated. The GMRES iterative matrix solver included with SciPy (Ref. 55) was used to solve the matrix equations produced using the BEM++ operators. The ACA accuracy and maximum block size parameters, and the GMRES solver tolerance, were left at their default values of 1 × 10^-3, 2048 and 1 × 10^-5, respectively, for scene 3. For scene 9, the former two were reduced to 1 × 10^-5 and 128, respectively, due to some convergence issues with the matrix solver; this improved the accuracy of the matrix approximation at the expense of increased storage and computational cost. Meshing was performed using the open-source meshing tool Gmsh (Refs. 56, 57). The maximum element size at any given frequency followed λ/8, with a minimum limit chosen to match λ/8 at the crossover frequency.
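For illustration, a per-point Python callback of the kind described above might look as follows. The precise definition of H^in_{m,n} is not reproduced in this excerpt, so it is assumed here to be the regular spherical basis function j_n(k|y − x_r|) Y_n^m; evaluating one (m, n) pair per call, in interpreted Python, is exactly what makes this step slow in practice.

import numpy as np
from scipy.special import sph_harm, spherical_jn

def regular_basis_kernel(m, n, k, x_r):
    # Returns a callback f(y) evaluating the assumed kernel at one quadrature point y.
    def f(y):
        d = np.asarray(y, dtype=float) - np.asarray(x_r, dtype=float)
        r = np.linalg.norm(d)
        polar = np.arccos(d[2] / r)
        azim = np.arctan2(d[1], d[0])
        return spherical_jn(n, k * r) * sph_harm(m, n, azim, polar)
    return f

# Each such callback, projected onto the receiving basis functions, yields one row of
# the discrete S_in operator (and, via its normal derivative, of D_in). The resulting
# blocked system can then be solved iteratively, e.g. with SciPy's GMRES:
#   from scipy.sparse.linalg import gmres
#   x, info = gmres(A, b, tol=1e-5)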
D. Scene-specific implementation Scene 1H This scene comprised a loudspeaker above a hard floor in a hemi-anechoic chamber; the floor was assumed to be perfectly reflecting. When applying BEM in half-space problems such as this, it is common to apply an image-source principle and reflect the source and all the obstacle boundaries in the rigid ground plane. Here, there is no additional obstacle, so there is actually no BEM solution at all; there is just the source and the image source, which has reflected directivity. Finding the pressure at a point is as simple as finding U_inc from both sources and summing. Similarly, A_{m,n} coefficients may be found by applying the translation techniques in Ref. 22 to each source and then summing. Scene 3 This scene included panels that were much thinner in one dimension than the other two. Following standard BEM practice (Refs. 43, 58), the obstacle was replaced by one of zero thickness lying on its center line. The material had a specific admittance Y that was uniform over both sides of the panel; under this condition the formulation given in Ref. 58 can be simplified to solving the following pair of coupled BIEs for all x ∈ Γ: D{Ũ_total}(x) + [ikY S − ½ I]{Û_total}(x) = −U_inc(x) and [H + ½ ikY I]{Ũ_total}(x) + ikY A{Û_total}(x) = −∂U_inc/∂n_x(x). These two boundary integral equations were combined as a blocked operator in BEM++. Here, Û_total and Ũ_total are, respectively, the sum of and the difference between the pressures acting on the front and back surfaces of the panel. A discontinuous piecewise-constant space was used to discretize Û_total and to "test" the first equation. A continuous piecewise-linear space was used to discretize Ũ_total and to "test" the second equation. This arrangement is consistent with the expectation that Ũ_total → 0 at the edge of the panel (Ref. 59) and meets the continuity requirements of H (Ref. 60). Pressure at receiver locations can be evaluated by U_scat(x) = D{Ũ_total}(x) + ikY S{Û_total}(x). Following the same logic used to derive this statement (Ref. 61), it can be asserted that the equivalent statement for the spherical harmonic coefficients will also hold: A^scat_{m,n} = ik D^in_{m,n}{Ũ_total}(x_r) − k²Y S^in_{m,n}{Û_total}(x_r). Scene 9 Scene 9 is a standard interior admittance problem, so it is simpler in its formulation than scene 3. The boundary integral equation to be solved for all x ∈ Γ is [D − ½ I + ik S Y(y)]{U_total}(x) = −U_inc(x). The scattered pressure may be found by U_scat(x) = [D + ik S Y(y)]{U_total}(x), and equivalently the spherical harmonic coefficients may be found by A^scat_{m,n} = ik[D^in_{m,n} + ik S^in_{m,n} Y(y)]{U_total}(x_r). Here, the only significant complication is that Y is position dependent, so it cannot be brought outside S. There are, however, a finite number of materials present in the room, each of which has a uniform value of Y. It is therefore possible to partition the boundary into segments for which Y is uniform and may be brought outside S; this leads to a blocked operator in BEM++. U_scat and A^scat_{m,n} can be found by evaluating the contribution from each material segment separately and then summing the results. E. Including measured material data The boundary data provided in the GRAS database was given as third-octave random-incidence absorption coefficients α. These were converted to an admittance by assuming the material was purely resistive and locally reacting; the latter is a common assumption and the former is acceptable for materials that are fairly hard and reflective (Ref. 26), such as the MDF in scene 3.
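A sketch of this conversion, anticipating the "55 degree rule" applied in the next paragraph, is given below. The exact expression used by the authors is in their supplementary code; here it is assumed that, for a purely resistive, locally reacting surface, the absorption coefficient at incidence angle θ is α = 4Y cos θ / (Y + cos θ)², solved for Y at θ = 55°, with a simple spline interpolation standing in for the fit described in the text.

import numpy as np
from scipy.interpolate import interp1d

def admittance_from_alpha(alpha, theta_deg=55.0):
    # Smaller root of alpha*Y**2 - 2*cos(theta)*(2 - alpha)*Y + alpha*cos(theta)**2 = 0
    c = np.cos(np.radians(theta_deg))
    alpha = np.clip(np.asarray(alpha, dtype=float), 1e-4, 1.0)
    return c * (2.0 - alpha - 2.0 * np.sqrt(1.0 - alpha)) / alpha

def interpolate_admittance(band_freqs, alpha_bands, sim_freqs):
    # Hold alpha constant below the lowest provided band, then interpolate to the
    # simulation frequencies with a cubic spline-like fit.
    Y = admittance_from_alpha(alpha_bands)
    f = np.concatenate(([1.0], band_freqs))
    Yv = np.concatenate(([Y[0]], Y))
    return interp1d(f, Yv, kind="cubic", fill_value="extrapolate")(sim_freqs)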
Using this, and applying the "55 degree rule" (Ref. 62), the specific admittance can be found. This was then interpolated to the required simulation frequencies using a spline fit, assuming α to be constant below the lowest band provided (20 Hz). This approach was, however, not considered adequate for some of the materials present in scene 9. In particular, it was expected that some materials, e.g., the glazing and door, would exhibit reactive behavior that gave significant losses at low frequencies, and that this was likely to be largely missing from measured material data because it would be non-locally reacting. Attempts were therefore made to fabricate plausible low-frequency resonant damping effects in their place. For example, the fundamental resonant modal frequency of the glazing was estimated from a plausible pane size and from the frequency of the coincidence dip visible in the measured data. This was implemented by fitting a mass-spring-damper model, and the admittance data produced was combined with that from the measured data using the non-linear crossover technique from Aretz et al. (Ref. 24). Missing data for the door was drawn from standard tables (Ref. 62) and proprietary field data, then embellished in a similar way. The result was a more plausible room response at low frequencies. For full details of the approaches applied, see the code included in the supplementary material (Ref. 63). IV. RESULTS In this section, results are presented to validate the proposed approach. First, the new process in Eq. (8) that encodes the pressure around a receiver will be verified. Then the results for the three case studies are validated against measurement, including BRIRs for scene 3. A. Verification of sound field encoding process Here, the objective is to verify the accuracy of the A^inc_{m,n} and A^scat_{m,n} coefficients. Evaluating a metric on these coefficients would be the ideal way of quantifying this, but this is complicated by the fact that all other known methods of obtaining them have limitations too (Refs. 14-16). So here, instead, the field has been decoded at a set of points in the domain and the values obtained compared to values of U_inc and U_scat computed directly by Eqs. (2) and (7), respectively. 2427 evaluation points were used, arranged quasi-randomly within a sphere of diameter min(1, λ) centered on x_r. The l2 norm of the residual was computed and then normalized by the l2 norm of the "correct" field; both were windowed with a Hanning window centered on x_r. The mean and standard deviation were then computed, averaging with respect to frequency and over loudspeaker position for scene 9, to obtain the trends in Fig. 3; error bars indicate ± one standard deviation. Note that only frequencies above 343 Hz were included in these statistics; below that, the simulated array became smaller with respect to λ, so the accuracy computed by the metric was unrepresentatively good. More detailed results plotted versus frequency are included in the supplementary material (Ref. 63). The residual for U_inc is shown for scene 3 only, since the trends for scenes 1 and 9 were identical. It continues to reduce over the full range of O_r investigated; what is seen here is just the effect of adding extra terms to the series in Eq. (2), and it appears the A^inc_{m,n} coefficients are calculated accurately for all of these. The standard deviation over frequency is extremely small too.
In contrast, the residuals for U_scat converge down to some value and then stop improving; here, decoding accuracy has been limited by error in the A^scat_{m,n} coefficients, indicating the accuracy of the encoding process; this is around 0.03% for scene 3 and around 1% for scene 9. These are average values, however, and the error bars indicate that quite significant variation occurs; the maximum residual post-convergence was 0.1% for scene 3 and 7% for scene 9. It is clear from this that the U_scat encoding process in Eq. (8) works quite well for scene 3 but rather less so for scene 9. The reason for this requires further investigation; one possibility is that it occurs because of the sudden change in boundary condition between adjacent materials. The accuracy achieved may still be sufficient for auralization purposes, however; note that the error metric used here is rather stricter than the "within x dB" criterion that is often used, and it includes phase error. B. Scene 1H The solution of scene 1H did not involve BEM, so it is included as a means of validating the source directivity model in Eq. (1) and the encoding process described in Sec. III B. Measured and simulated results are compared in Fig. 4, which is plotted for loudspeaker position 3 from the database (height 2.6 m, angled 60° down) and microphone position 1 (height 1.52 m, 4.1 m from the loudspeaker horizontally). In both cases, it is seen that agreement is extremely good. Figure 4(a), in which the measured results have been low-pass filtered to match the processing applied to the simulations, shows correct arrival times and polarity, with onset amplitude well matched. Later, the measured result includes a low-frequency oscillation that is absent from the simulated data. Frequency spectrum agreement in Fig. 4(b) is also good, with measurement and simulation within 3 dB except at some notches and around 50-100 Hz, where there is a boost in the measured data; this is likely related to the low-frequency oscillation seen in the measured data in Fig. 4(a). C. Scene 3 Scene 3 is an attractive verification case because its simulation involves BEM but it generated a sparse, physically insightful reflection pattern for both RIRs and BRIRs. Figure 5 shows the RIRs. Again, very good agreement can be seen between simulation and measurement, possibly better than for scene 1H in fact. The onset times, phases and amplitudes of the pulses in Fig. 5(a) are all well captured. In Fig. 5(b), the interference pattern resulting from the flutter echo is very well matched up to the crossover frequency of 1 kHz, after which the simulation begins to deviate from the measured result, perhaps because the BEM mesh is no longer being refined. The results from 1.5 to 2 kHz are hidden from the plot so the lower frequencies can be seen more clearly. The BRIRs are plotted in Fig. 6. In the time-domain results in Figs. 6(a) and 6(b), an instantaneous SPL scale in dB has been used, so that the decay and relative amplitude of the pulses can be seen for longer. Again, the pulse times and amplitudes are well matched, and the pattern of which channel is louder matches and makes physical sense. Reflections 1, 4, 7, 10, and 13 are louder in the right ear that faces the loudspeaker, being the original incident wave and its subsequent reflections back around the system, while others are similarly loud or louder in the left ear. The frequency-domain match is less good than was seen in Fig. 5(b); the results can be seen to track each other up to 500 Hz at least.
Deviations above that could occur because the simulation process has combined datasets that were measured at different times under different conditions. It should be mentioned that Fig. 6 hides a detail: the simulated and measured BRIRs were negated with respect to one another. Noting, however, that the RIRs in Fig. 5 matched in sign and that the encoding and decoding were verified in Fig. 3 and Fig. 2(b), it seems most likely that this originates from the HRTF dataset. Auralizations for this scene are included in the supplementary material that accompanies this article (Ref. 63). D. Scene 9 Scene 9 was included as a more realistic application of the simulation framework. Accuracy is, however, expected to be much poorer than for the other two scenes because: (i) as a closed geometry, the modal and reverberant damping is controlled entirely by boundary absorption mechanisms; and (ii) these mechanisms were quite crudely quantified, as discussed in Secs. I B and III E. Results are shown for source position 2, coordinates [0.12, 2.88, 0.72] m, and microphone position 2, coordinates [0.44, −0.15, 0.12] m; the origin of the coordinate system is roughly the center of the room at floor height. The RIR is shown in the time domain in Fig. 7(a) using an instantaneous SPL scale in dB. It was not expected that the fine detail would match, and it does not, but it can be seen that the general SPL and the decay rate match well. Figure 7(b) shows the same data in the frequency domain, displaying 0-250 Hz, since the modal density above this means no discernible features are observable. The general SPL trend is captured quite well up to 170 Hz, with matches between individual modal frequencies identifiable. That some of these peaks match quite well in SPL and bandwidth is impressive, since this is largely dictated by the boundary absorption data, which was heavily extrapolated. BRIR results are not shown since little detail can be discerned graphically, but auralizations for this scene are included in the supplementary material that accompanies this article (Ref. 63). E. Computation times Computation times for scenes 3 and 9 are shown in Figs. 8(a) and 8(b), respectively. Here, the main observation is that all trends scale with #DOF (plotted against the right-hand axis). This is expected for the "Setup RHS" and the "Receiver encoding" tasks, being processes 3 and 5 in Fig. 1, respectively, but not for "Setup LHS" and "Solve", which together form process 4 in Fig. 1. In a conventional BEM, these would scale with #DOF², so it is clear that the ACA solver has achieved significant computational savings. In contrast, "Setup RHS" and "Receiver encoding" would normally be expected to be by far the quickest steps but, due to the aforementioned implementation as interpreted Python functions, they are much slower than the optimized, compiled core of the library. This should not, however, be taken as representative of the new source and receiver mapping techniques described herein; it is merely due to implementation compromises, and an efficient, compiled implementation of them could readily be achieved. V. CONCLUSIONS AND FURTHER WORK This paper has proposed a framework for low-frequency room acoustic simulation, echoing similar frameworks that have been proposed for geometrical acoustics models at high frequencies.
A key component of this was the mapping proposed by Hargreaves and Lam (Ref. 18) to encode boundary data to spherical harmonic descriptions of the pressure field around a receiver, and verification data, results, and auralizations using it are provided herein. The full simulation chain was validated using three case studies drawn from the GRAS database, one of which was hemi-anechoic, one fully anechoic, and one a real room. The simulations were seen to match measurement well for the hemi-anechoic and fully anechoic cases, but less so for the room; this was expected, since standardized means of quantifying boundary material data are not sufficient for the simulation algorithms brought to bear. Clearly, this latter aspect is a limiting factor in the simulation chain, and one that must be addressed if room acoustic simulations are to move from being plausible to reliably physically accurate. In terms of future work, it is clear that the current implementation of the new source and receiver mappings is extremely inefficient, and an optimized, compiled version would be required for serious usage. It is also clear that repeated use of a frequency-domain BEM code followed by an IFFT is not an efficient way of generating an impulse response; convolution quadrature (Ref. 46) or multi-frequency (Refs. 49, 50) approaches would be far more efficient. More research is required to set bounds on the accuracy of the new pressure field encoding process, since this was seen to vary with the problem modelled. Finally, these simulations have shown that numerical models can closely match reality when the input data is of good quality, but that they deviate when it is not. Hence, improved techniques to characterize materials, ideally in situ, are required.
13,573
2019-04-30T00:00:00.000
[ "Physics" ]
Trifluoroacetylation of Alcohols During NMR Study of Compounds with Bicyclo[2.2.1]heptane, Oxabicyclo[3.3.0]octane and Bicyclo[3.3.0]octane Skeleton TFA was added to a solution of a bicyclo[2.2.1]heptane azide-alcohol in CDCl3 to correctly characterize the compound, but within 24 h it gave the trifluoroacetylated compound in quantitative yield. The NMR spectra of the esterified compound also helped us to correctly assign the NMR signals to the protons and confirmed the identification of the carbon atoms. The study was extended to 14 other compounds containing a primary alcohol group alone or together with an ethylene ketal, a δ- or γ-lactone group, a primary and a secondary alcohol group, two primary alcohol groups and an alkene group, or two primary and a secondary alcohol group, on scaffolds containing bicyclo[2.2.1]heptane, oxabicyclo[3.3.0]octane, bicyclo[2.2.1]heptane constrained with a cyclopropane ring, and bicyclo[3.3.0]octane fragments. The esterification of all compounds was also quantitative in 24 to 72 h; this helped us to correctly assign the NMR signals to the protons and carbon atoms of the un-esterified compounds by comparison with those of the trifluoroacetylated compounds. A graphical presentation of the 1H- and 13C-NMR spectra of a few un-esterified and esterified compounds is given in the paper. 1. Introduction A routine NMR characterization of a compound with a primary alcohol group needed to distinguish between two protons, and after adding trifluoroacetic acid (TFA) to the tube we observed the appearance of signals which proved to be those of the trifluoroacetylated compound, formed quantitatively after a day. Browsing the literature, we found that alcohols are even quantitatively esterified with TFA [1,2]. For example, the synthesis of ethyl trifluoroacetate from ethyl alcohol and TFA is also mentioned in patents [3-6] and journals [7]. Methyl and isopropyl trifluoroacetate were also synthesized from TFA and methanol or isopropanol [8,9]. The esterification of alcohols in TFA as solvent was performed and followed in an NMR tube [10-12] to observe the transformation of alcohols into the corresponding trifluoroacetates, because the deshielding effect of the trifluoroacetyl group produces a considerable shift to lower field of the signal for the proton(s) on the carbon bearing the hydroxyl group, by comparison with that of the corresponding protons in the starting alcohol. Though esterification of hydroxyl groups was observed in CDCl3 + TFA during NMR spectra registration, a paper on trifluoroacetyl esterification during NMR studies was not found in the literature. The paper is therefore presented as a scholarly theme, exemplified for primary and secondary alcohol groups linked to different bicyclic and tricyclic scaffolds and in different configurations. Materials and methods The compounds were dissolved in CDCl3, and 1H, 13C, and 2D-NMR spectra were recorded. Then about 0.03 mL of TFA was added and the NMR spectra were followed until the esterification approached its end within the specified reaction time. The same spectra were recorded in DMSO-d6 for a few of the compounds. Chemical shifts (δ) are given in ppm relative to TMS as internal standard. Complementary 2D-NMR spectra were recorded for the correct assignment of NMR signals. The numbering of the atoms in the compounds is presented in Figure 1. 13. NMR spectra of the optically active diol compound 15a and of the bis-trifluoroacetylated compound 15c. 14. NMR spectra of the compound 12a and of the bis-trifluoroacetylated compound 12c.
The un-symmetric triol 12 was esterified with trifluoroacetic anhydride in TFA (in the NMR tube) and the complete esterification proceeded in 4 h. The signals of the bis-trifluoroacetylated compound 12c are described below. 15. NMR spectra of the ent-Corey compound 11a and of the bis-trifluoroacetylated compound 11c. The compound 11a was not completely esterified, even after 10 days and with a significant excess of TFA. Neither the proton nor the carbon NMR spectra were decipherable, as in the case of 10a + TFA. The symmetric triol 11 was esterified with trifluoroacetic anhydride in TFA (in the NMR tube) and the complete esterification proceeded in 4 h. The signals of the tri-esterified compound 11c are described below. The influence of the solvent showed the uselessness of this esterification effort, because the NMR signals were of no help in finding these signals in the NMR spectra in DMSO-d6. Results and discussion TFA is usually used in NMR of alcohols, amines or thiols, whose signals overlap with those of the protons linked to vicinal carbon atoms and make it difficult to calculate the coupling constants and to assign the signals to the correct protons. Due to its acidity, it moves the labile protons (OH, NH, SH) to lower field, and also suppresses their coupling with the protons linked to the carbon atoms vicinal to oxygen, nitrogen or sulfur. In this paper, NMR spectra of the compounds 1-10 and 13-15 were recorded in CDCl3 as solvent and, for the compounds 11 and 12 (insoluble in CDCl3), in DMSO, followed by addition of TFA in the NMR tube. This paper started from the idea that, for the correct assignment of the signals of the protons H2 and H5 of the endo-azide 1a (Figure 1), TFA was added; but in the spectrum signals began to appear at lower field, suggesting that the esterification reaction of the compound with TFA had started. The endo-azide 1a was completely esterified in 24 h to the trifluoroacetate 1b, and this can be observed in both the 1H- and 13C-NMR spectra. In 1H-NMR, the methylene protons of the primary alcohol are deshielded from 3.95 to 4.78 ppm, respectively from 3.92 to 4.68 ppm. In 13C-NMR, the chemical shift of the carbon atom C8 is deshielded from 60.35 ppm to 66.37 ppm, and the vicinal carbon atom C7 is shielded from 51.74 to 47.88 ppm. The signals for the trifluoroacetate group are present in the spectrum at 158.21 (C=O) and 114.55 (CF3) ppm. A comparison of the 1H- and 13C-NMR spectra of the starting compound 1a and the esterified compound 1b is presented in Figure 2. Starting from this observation, the exo-azide compound 2a was similarly studied by addition of TFA in the NMR tube, and the esterification to 2b was also finished in 24 h (Figure 3 and Experimental, 2). Then the study was extended to other compounds: with primary alcohol groups (3a-7a), with primary and secondary alcohol groups (8a-12a), with a secondary alcohol group alone (13) and with a secondary alcohol group in the presence of a secondary allylic alcohol (14, 15), in racemic (5a, 10a-12a) or optically active form (1a-4a, 6a-9a, 13-15), usually used in our laboratory and presented in Figure 1, which contain: - a 5-keto group in the bicyclo[2.2.1]heptane skeleton (compounds 3a and 4a), or protected with an ethylene ketal group as in the compound 5a, or in a more constrained structure containing a cyclopropane ring (compound 6a); - a double bond in a bicyclo[3.3.0]octane diol with two primary alcohol groups (compound 10a);
- a bicyclo[3.3.0]octane triol containing two primary alcohol groups and a secondary alcohol group linked to the C5 carbon atom (compound 11a) or to the C6 carbon atom (12a, with one primary alcohol protected as an acetate); - an enone (13) and two diols (14, 15) in prostaglandin intermediates, containing a double bond. The 5-keto compounds 3a and 4a were cleanly esterified with TFA in 24 h to the compounds 3b and 4b, and the deshielding of the H8 protons from δ = 4.05 to 4.79 ppm and from δ = 3.95 to 4.70 ppm, and of the carbon atom C8 from δ = 59.62 ppm to 64.95 ppm, upon trifluoroacetyl esterification is observed in the NMR spectra, as previously. After 3 days (over the weekend), the compound 3b was obtained pure (Figure 4 and Experimental, 3). In the case of the ethylene-ketal protected compound (±)-5a, trifluoroacetic acid esterified the primary alcohol, but at the same time it deprotected the ethylene ketal group, giving the compound 3b, as can be observed in the 13C-NMR spectrum, where the ketal C5 carbon atom (δ = 113.84 ppm) no longer appears, while the C5-ketone carbon atom appears at 216.01 ppm (Figure 5). The ethylene ketal seems to be transformed into ethylene glycol and ethylene glycol mono-trifluoroacetate in a ratio of 1:1, as can be observed in the 1H-NMR and 13C-NMR spectra (Experimental 5, where the signals of ethylene glycol and of the mono-trifluoroacetate are evidenced in red color). For the compound 7a, the δ-lactone resisted during its quantitative TFA-esterification to the trifluoroacetylated compound 7b in 24 h (Experimental, 7). The diol 8a was esterified with TFA at both the primary and the secondary alcohol (Figure 6a), and this was observed even after two hours (Figure 6b), but it was not observed that the esterification of the primary alcohol to 8b proceeded selectively before the esterification of the secondary one began; in 24 h the reaction proceeded quantitatively and gave the compound 8c (Experimental 8), which is clearly observed in 1H- and 13C-NMR. In 1H-NMR, H5 was deshielded from δ = 4.23 ppm to δ = 5.20 ppm, and the H8 protons were deshielded from δ = 3.96 to 5.20 ppm, respectively from 3.91 to 4.81 ppm. Both trifluoroacetyl groups appear in the 13C-NMR spectrum. The compound 9a, containing an oxabicyclo[3.3.0]octane fragment, was esterified at both the primary and secondary alcohol groups, without the observation of selective trifluoroacetylation of the primary alcohol before the trifluoroacetylation of the secondary alcohol begins (Experimental, 9). In 13C-NMR, both trifluoroacetyl groups appear in the spectra, at 157.41 and 114.26 ppm for COCF3 and COCF3 of the primary alcohol ester, respectively 157.10 and 114.61 ppm for COCF3 and COCF3 of the secondary alcohol ester. The deshieldings of the H5 and H7 protons from 5.27, 4.19 and 3.78 ppm to 5.27, 4.45 and 4.39 ppm, and the deshieldings of the corresponding carbon atoms C5 and C7 from 75.65 to 80.60 ppm and from 63.73 to 66.57 ppm, are clearly observed, as previously for 8a to 8c. The same shieldings of the carbon atoms vicinal to C7 and C5, i.e. C4 and C6, from 55.02 and 40.81 ppm to 50.61 and 37.59 ppm are also observed, as can be seen in Figure 7. The double bond of the diol-alkene 10a was not affected during the trifluoroacetylation of the primary hydroxyl groups to 10b. In 4 h, nearly half of the hydroxyl groups had been esterified, and both hydroxyl groups were esterified after 24 h (Figure 8, Experimental, 10). The deshielding of the H11 proton (where it appears together with the H9 proton) and of the corresponding carbon atom C11 from 76.55 ppm to 81.73 ppm is also observed (Figure 9).
Shielding of the vicinal carbon atoms C10 and C12, from 40.39 and 56.35 ppm to 37.10 and 54.01 ppm respectively, is also observed (Experimental, 11). The prostaglandin diol intermediate 14a, containing a secondary (11-OH) and an allylic secondary (15-OH) alcohol group, was esterified with TFA over the weekend (4 days; in 24 h the bis-trifluoroacetylation had not ended), without concluding that the 15-OH allylic alcohol is esterified more selectively than the 11-secondary alcohol (Figure 10 and Experimental 12). The deshielding of the protons H11 and H15 from 4.00 and 4.54 ppm to multiplets centered at 5.25 and 5.85 ppm, and of the corresponding carbon atoms C11 and C15 from 76.52 and 70.50 ppm to 81.74 and 76.39 ppm, is clearly observed (Figure 10). The shielding of the vicinal carbon atoms C10, C12 and C16, from 39.85, 56.35 and 71.81 ppm to 36.66, 53.79 and 68.47 ppm respectively, is also observed for this diol. The triols 11 and 12 have low solubility in CDCl3, and their NMR spectra were recorded in DMSO. In both cases, the esterification with TFA is very slow and the 1H- and 13C-NMR spectra were not complicated by the formation of the trifluoroacetyl esters. At longer reaction times, the formation of the esters did indeed complicate both the proton and carbon NMR spectra and the signals could not be attributed (Experimental, 14, for compound 12a with TFA). The esterification with trifluoroacetic anhydride in TFA proceeded cleanly to the esterified compounds 12c and 11c, and both compounds were fully characterized; unfortunately, because of the different solvents in the NMR experiments, the signals of the trifluoroacetylated compounds could not be used to identify them among the compounds formed in DMSO-d6 + TFA (Experimental 14 and 15). In conclusion, the use of TFA to clarify the NMR spectra of the alcohol compounds in DMSO-d6 at shorter times (even up to hours) is beneficial, because the secondary trifluoroacetylated compounds are not formed in amounts that complicate the NMR spectrum. So, the transformation of the alcohol compounds 1a-10a and 13a-15a into the corresponding trifluoroacetates in CDCl3 + TFA in NMR tubes helped us to correctly attribute or confirm the NMR signals of some protons and carbon atoms in the molecules, which would otherwise have been difficult. Some observations should be mentioned. The primary alcohol groups of the compounds 1a-7a and 10a were quantitatively transformed with TFA into the trifluoroacetylated compounds 1b-4b, 6b-7b and 10b in a short time (mainly 24 h). Only the ethylene ketal group of the compound 5a was deprotected during the esterification of the primary alcohol, giving the trifluoroacetylated compound 3b. The secondary alcohol groups of the compounds 8a and 9a were esterified with TFA concomitantly with the primary alcohol groups (to the compounds 8c and 9c), without the observation that the esterification of the secondary alcohols begins only after the primary hydroxyl groups have been esterified. For the compounds 14a and 15a, the esterification of the secondary 15-allylic alcohols did not proceed selectively over the 11-secondary alcohols; at longer times (3-4 days) both alcohol groups were esterified. It is worth mentioning that the trifluoroacetylation in CDCl3 proceeded faster than that in DMSO-d6, where even an additional amount of TFA did not lead to completion of the esterification reaction.
The results are to be taken into consideration when TFA is added to an NMR tube, because not only can the shifting of the labile protons help to simplify the NMR spectrum, but the trifluoroacetylation can also help us to attribute more precisely the signals in the 1H- and 13C-NMR spectra to the protons and carbon atoms in the molecule. Conclusions TFA added to the solutions of the compounds 1a-10a and 13a-15a in CDCl3 esterified the primary and the secondary alcohols to the corresponding trifluoroacetylated compounds 1b-7b, 8c-10c and 13b-15c in 24 to 72 h in quantitative yield. The ethylene ketal group of the compound 5a was deprotected to the ketone group of the compound 3b, concomitantly with the esterification of the primary alcohol. The δ-lactone group of the compound 7a and the γ-lactone group of the ent-Corey lactone 9a were not opened during the esterification of the alcohol groups. The esterification of the secondary 15-allylic alcohols of the compounds 14a and 15a did not proceed selectively over the 11-secondary alcohols, so at longer times (3-4 days) both alcohol groups were esterified. In DMSO-d6, the esterification of the secondary and primary alcohol groups of the triols 11a and 12a was much slower and did not go to completion even after 7 days, even though the amount of TFA was greater. In conclusion, TFA added to the NMR tube not only shifts the labile protons and simplifies the NMR spectrum, but, by trifluoroacetylation of the alcohol groups, it can also make the attribution of the signals in the 1H- and 13C-NMR spectra to the protons and carbon atoms in the molecule easier and more precise.
3,509.8
2021-05-07T00:00:00.000
[ "Chemistry" ]
Students’ Mathematical Thinking in Column Calculation and Algorithms This study aims to investigate primary school students’ mathematical thinking in column calculation and algorithms. The method used in this research is descriptive qualitative. The participants of this research were twelve Indonesian primary school students in grades 3 and 4 in Ciparay, Bandung, West Java. They worked to solve calculation and algorithm questions developed from the TAL TEAM book of the Freudenthal Institute, Utrecht University, the Netherlands, and their answers were classified based on the strategies used. After analyzing their written work, interviews were organized to acquire further information about their mathematical thinking. The study found that students’ strategies in dealing with calculation and algorithms consist of three strategies. The first strategy is the splitting strategy from the units to the tens column, after which the interim results are combined. The second strategy is the splitting strategy from the tens to the units column, in which the interim results are added vertically digit by digit. The third strategy is the transition from the splitting strategy from units to tens, which consists primarily of using abbreviated column calculation for the interim results in each column. In addition, several students made common mistakes due to misconceptions about the algorithm and arithmetical errors. Implications of this research for the teaching and learning of calculation and algorithms are described elaborately. INTRODUCTION The aim of education is not only the ability to correctly and quickly carry out given tasks and commands but also the ability to think and make decisions (Isoda & Katagiri, 2012). Mathematics is known as a subject which can develop students' thinking. Mathematics is one of the school subjects which plays an important role in preparing students, especially in developing thinking skills and solving complex problems (Herman, 2020). Mathematical thinking enables students to understand the necessity of using knowledge and skills (Isoda & Katagiri, 2012). It can be investigated by using contexts which are familiar to students. Using realistic mathematical contexts will enable students to become inventors, because mathematics is an essential part of daily life, being a form of human activity (Pramudiani et al., 2016). The context itself should be meaningful to students' minds. Zulkardi & Putri (2010) stated that context is a key point for students in developing mathematics. Context can be developed from phenomenological exploration. It will enable students to make a symbol or model of the situation for progressive mathematization. Models are fundamentally used to raise a concrete situation as a starting point for promoting formal mathematics (Gravemeijer, 1994). Furthermore, it is stated that the transformation from "model of" into "model for" is specified in four levels, i.e., the situational, referential, general, and formal levels. Concerning the importance of cultivating students' mathematical thinking, in the present research the researchers are interested in assessing students' capabilities related to skills, knowledge, and the development of mathematical thinking in calculation and algorithms. Nelissen (1999) stated that knowledge is the result of activities perceived from learning. The contextual situations given to students related to addition and subtraction problems.
The problems were administered to students sequentially, starting from phenomenological exploration (using context) and followed by formal mathematical problems (without context). The purpose was to assess how far the students' mathematical thinking related to the problems given. Besides that, the problems were extended by giving special problems for in-depth understanding and magical problems related to addition and subtraction. In this case, the students were expected to use their intuition to come to the idea of number sense. Hogarth (1992) identifies three components that can promote students' intuition, i.e., 1) creating awareness, 2) a framework for obtaining specific learning skills, and 3) practice. This study focuses on students' mathematical thinking in number calculation and algorithms. The reason why we chose number calculation is that, according to the National Research Council ([NRC], 2001; Anghileri in Purnomo et al., 2014), number is one of the most basic mathematical concepts in primary school, with purposes in (1) resolving daily-life problems, (2) becoming the basis of the whole mathematics curriculum, and other goals. Algorithms are used when operating with relatively large whole numbers and multi-digit decimal numbers that are difficult to calculate mentally in a fast and simple fashion. At any rate, this is the way it used to be, but nowadays people can use a calculator in many of these cases. In the example above, however, the numbers are too small to make it worthwhile using a calculator. Even using the addition algorithm may be excessive for such a simple problem. The general opinion is that students in grades 3 and 4 must be able to solve such problems without using an algorithm (TAL TEAM, van den Heuvel-Panhuizen, 2008). When the term "column calculation" is used in this paper, it refers to the term "algorithm calculation" (the third strategy). In the case of addition, algorithm calculation can therefore be considered as a natural extension and closure of column calculation and mental arithmetic, the final step of a three-stage process of progressive schematization and automatization of arithmetic operations (TAL TEAM, van den Heuvel-Panhuizen, 2008). Column (mental) calculation and algorithm calculation are therefore related. In the second half of the 20th century, there was a change to the existing standard algorithms (TAL TEAM, van den Heuvel-Panhuizen, 2008). In subtraction, for example, the students were encouraged to work with groups of digits where required, such as follows:
967
348 −
619
When 7 − 8 is impossible, in the traditional algorithm the students are usually taught to borrow a ten from the neighbouring column. However, sometimes students can only carry out the algorithm procedures without understanding the meaning behind them; they just follow the teacher's instruction. This becomes dangerous when students only know the procedure without understanding the concept. The ability to use mathematical thinking is even more important than knowledge and skill, because it enables the driving of the necessary knowledge and skill (Isoda & Katagiri, 2012).
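To make the borrowing procedure discussed above explicit, the following short Python sketch carries out the traditional column subtraction digit by digit, from units upward, exactly as in the 967 − 348 example; it is an illustrative sketch of the standard algorithm, not of the TAL TEAM variant that works with groups of digits.

def column_subtract(a, b):
    # Traditional column subtraction with borrowing, a >= b >= 0
    top = [int(d) for d in str(a)][::-1]          # digits, units first
    bottom = [int(d) for d in str(b)][::-1]
    bottom += [0] * (len(top) - len(bottom))
    result, borrow = [], 0
    for t, u in zip(top, bottom):
        t -= borrow
        if t < u:                                  # "7 - 8 is impossible"
            t += 10                                # borrow a ten from the next column
            borrow = 1
        else:
            borrow = 0
        result.append(t - u)
    return int(''.join(str(d) for d in result[::-1]))

print(column_subtract(967, 348))                   # 619, as in the example above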
Therefore, this research aims to assess the mathematical thinking of students in doing calculation and algorithms, and also to make recommendations for design activities and contexts for teaching and learning, in line with the aim of education to develop students' knowledge and skill through mathematical thinking. State of the Art As mentioned previously, students' mathematical thinking is very important as a key point in solving mathematical problems. However, according to Zulkardi (2003), most 'local' mathematics textbooks contain primarily sets of rules and algorithms and lack applications that are experientially real to the students. Furthermore, it is stated that the results of the tests showed that most students lacked comprehension of the basic skills that they should have learned in primary school and in daily-life problems. A meaningful context can promote students' learning from the informal level to the pre-formal level containing mathematical ideas (Pramudiani et al., 2011). Research Urgency This research is very important since mathematical thinking is at the core of students' work on mathematical problems. Mathematical thinking acts as a drive for students' understanding, which underlies knowledge and skills (Isoda & Katagiri, 2012). When students can only carry out a traditional algorithm procedure without understanding the mathematical concept, it becomes dangerous for them to solve mathematical problems related to daily life, because mathematics is a human activity (Freudenthal in Pramudiani et al., 2011). METHOD To reach the goal of this research, a descriptive qualitative approach was chosen, using the procedures shown in the following figure: Picture 1. Scheme of Data Analysis Technique by Miles and Huberman (1992). The data analysis technique involved data collection, data reduction, data presentation, and conclusion drawing. Creswell & Creswell (2018) stated the characteristics of qualitative research, some of which are the natural setting, the participants' meanings, and the theoretical views. Therefore, to reach the goal of this study, information about students' mathematical thinking in calculation algorithms was investigated elaborately through students' written work, observation, discussion, and interviews, and the results were analyzed by applying the conception of calculation algorithms. Participants A purposive sampling method was chosen to select the participants of the research because, during the COVID-19 pandemic, most schools in Indonesia implemented learning-from-home policies to prevent the spread of the corona virus. Therefore, some students who were easily accessible to the researchers became the research subjects. They were primary school students in grade 3 or 4 (9 or 10 years old) in Ciparay, Bandung, West Java. The students were of intermediate level. The total number of these students was twelve. However, only ten students could be analyzed in detail because two of them could not yet read text properly. Afterwards, interviews were organized with all ten students who were selected after analyzing the data. The researchers chose them by considering their written works, as some of them showed similar strategies, mathematical thinking, mistakes, and misconceptions.
Calculation Questions The ten calculation questions used in this research were developed from the "Children Learn Mathematics" book (TAL TEAM, van den Heuvel-Panhuizen, 2008) and modified using Indonesian contexts. They included addition and subtraction problems (both using context and without context), special problems for in-depth understanding and fun, and magical problems related to addition and subtraction. In addition to developing critical thinking and learning to believe in the power of their own thinking, students also need to be helped to find pleasure in mathematics (TAL TEAM, van den Heuvel-Panhuizen, 2008). Addition and subtraction problems using Indonesian contexts allowed the researchers to figure out how students work with those contents using phenomenological exploration, followed by progressive mathematization. Subsequently, the students were given special problems for in-depth understanding and magical problems related to addition and subtraction, in order to evaluate how far they understood the calculation problems in fun ways. Concerning the validity of the instruments, the researchers discussed them with three experts in mathematics education, and improvements and suggestions were obtained. Procedures The study combined data from a written test and interviews. Therefore, the data consist of two types: written data and interview data. The test was given to 12 primary school students in grade 3 or 4 (9 or 10 years old) in Ciparay, Bandung, West Java. Data collection consisted of 3 phases. The first phase was held on 9th December 2020. In the first phase, the students' initial knowledge of calculation and algorithms was assessed. In this stage, they were asked to complete four questions. The first question was an addition problem using context and the second question was an addition problem without context. In the third question they were asked to solve a subtraction problem using context, and the fourth question was a subtraction problem without context. In the first week, the researchers analyzed the students' written works and classified them into several strategies. The second phase was held on 16th December 2020. In the second phase, the researchers explored the students' mathematical thinking using broader problems: inkblot problems (taken from TAL TEAM, van den Heuvel-Panhuizen, 2008) and magical problems which had been adjusted to the Indonesian context. In this stage, the researchers not only gave the written tasks but also held a class discussion after the students finished their work. After analyzing the students' written works, in the third phase, held on 17th December 2020, the researchers organized interviews to acquire further information and justification related to the students' work by asking them to explain the strategies and concepts they used. To do this, questionnaires were prepared beforehand to guide the researchers during the interviews. During the interviews, health protocols and physical distancing were applied because the research was conducted during the COVID-19 pandemic. In this stage, the students described and gave more explanation about their answers, so that their mathematical thinking could be observed. Data Analysis The results of the students' written works were compiled. The students' written works were labelled sequentially from A1 to A12 in order to help the researchers identify each work during analysis. Afterwards, each question was analyzed elaborately.
Establishing categories of students' strategies was conducted in three sequential stages. The first stage was to evaluate the students' initial knowledge of calculation and algorithms. In other words, we analyzed each student's work on addition and subtraction problems, both with context and without context, in turn. Differences were expressed in how students understood the problems and chose appropriate strategies (either using column calculation or an algorithm, including splitting strategies). The second stage was to dig deeper into their mathematical thinking by applying special problems for in-depth understanding and fun, and magical problems related to addition and subtraction; that is, a teaching and learning process through class discussion was held at this stage. Then, the third stage was to capture the students' mathematical thinking, which consisted of expressing the students' reasoning about their work and explaining the strategies they used. In this stage, the researchers asked them whether their strategies came from their own thinking or whether they just followed the instruction from the teacher in their class. To confirm the students' understanding of number calculation and algorithms, we posed a tricky question by asking them whether the order of the algorithm in column calculation can be switched. For example, when they solved the addition problem using column calculation, they were used to doing it from right to left (from units to tens). In this case, the researchers asked them, "Can we start the calculation from left to right (from tens to units)?" Some of them answered, "Yes, we can." After declaring that they could do it, they tried to prove it in that way. However, they made mistakes when they combined tens and units, so they became confused about why the results were different. Similarly, for the subtraction problem, when they stated that they could not subtract the bottom digit from the top digit because the top digit was less than the bottom digit, they usually "borrowed" 1 from the digit to the left. However, some of them could not explain why they should do this; they just said that their teacher had asked them to do so. Concerning the students' common mistakes, the analysis was conducted by evaluating each question based on the contents of the calculation. Since there were six problems, all students' responses were investigated problem by problem. Then, all students' mistakes were listed and categorized based on their reasons. In determining the types of students' strategies and their common mistakes, the researchers classified them based on the literature. Consultation with three experts in mathematics education was conducted and, eventually, approval was acquired. In this study, trustworthiness is depicted through dependability, transferability, credibility, and confirmability (Guba & Lincoln in Aziz et al., 2017). The students' various reasonings in the interview questions and the data collection procedures were explained. Interviews and all the activities held in this research were recorded and can be accessed on YouTube, so that no information was lost. Theoretical purposive sampling and further description were used to support transferability (Aziz et al., 2017). Triangulation was also organized, as the researchers used multiple methods of data collection such as students' written works, interviews, probing questions, and literature review.
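Returning to the tricky question about the order of column addition described earlier in this subsection, the following short sketch uses the context-free item 463 + 382 (from the results below) to show why working left to right only succeeds when the interim results keep their place value; combining bare digits is what confused the students.

a, b = 463, 382

# Right to left (units first), carrying as in the standard algorithm
units = 3 + 2                # 5
tens = 6 + 8                 # 14 -> write 4, carry 1 hundred
hundreds = 4 + 3 + 1         # 8, including the carry
print(hundreds * 100 + (tens % 10) * 10 + units)   # 845

# Left to right (splitting strategy): keep the place value of each interim result
print((400 + 300) + (60 + 80) + (3 + 2))           # 700 + 140 + 5 = 845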
Finally, the students' reasons for applying the strategies, their difficulties, and the resulting categories of students are described further in the results and discussion. RESULTS AND DISCUSSION The results of the analysis showed important data concerning the students' strategies and common mistakes while solving the calculation problems. The students' strategies were classified into three categories, namely: 1) the splitting strategy from the units to the tens column, after which the interim results are combined; 2) the splitting strategy from the tens to the units column, in which the interim results are added vertically digit by digit; and 3) the transition from the splitting strategy from units to tens, which consists primarily of using abbreviated column calculation for the interim results in each column. Based on the focus of this research, all of these are elaborated by describing important data (students' written responses or interviews). Therefore, the three strategies of number calculation and algorithm in each problem are discussed elaborately using several students' typical work. Before analyzing the data in detail, the researchers labelled the students A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, and A12. However, two of them (A1 and A4) could not be analyzed further because they could not yet read the text properly; both of them were in grade 3 (9 years old). So, there were only ten students who were analyzed completely. Students' Strategies in the Addition Problem The first stage was assessing the students' initial knowledge of calculation using context. The problem was given to the students in Bahasa Indonesia, while in this report it is translated into English. Herewith the problem given to the students (an addition problem about the candies of Upin and Ipin). Of the ten students, almost all could solve the problem correctly. Seven of them, namely A2, A6, A7, A9, A10, A11, and A12, used column calculation with the algorithm and described it in the following ways: Picture 2. Students' Answer Using Column Calculation. When they did the calculation, they used splitting strategies from units to tens. When the sum of the units became more than 10, they just wrote the interim digit in the units column and then added 1, which means 10, to the tens column. When the researchers asked them why they used these procedures, they answered that they knew them from their teacher or their parents. However, none of them used a symbol or model as a bridge from the contextual situation to progressive mathematization. Although they did not make symbolizations, all of them could give a reason why they solved the problem using addition procedures: when the candies of Upin and Ipin were added, the total number became larger. There was one student (A5) who used the splitting strategy from tens to units and answered correctly. However, he made a mistake when he wrote the number one hundred as 1000. The strategy of A5 was as follows: Picture 3. Student's Answer Using Splitting Strategy. Besides that, there was one student (A8) who used column calculation but obtained a wrong answer, as follows: Picture 4. Student's Answer Using Column Calculation with a Common Mistake. When the researchers interviewed A8, it became apparent that he had not calculated the numbers carefully. This mistake is commonly encountered among students because, from the observations held by the researchers, most students still count numbers manually (using their fingers).
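As a hypothetical illustration of the three strategy categories listed above (the numbers 57 + 36 are chosen here for illustration only, since the candy problem itself is not reproduced in this report): in the first strategy the student computes 7 + 6 = 13 and 50 + 30 = 80, starting from the units, and then combines the interim results, 80 + 13 = 93; in the second strategy the student computes 50 + 30 = 80 first and 7 + 6 = 13 next, writes the interim results under one another, and adds them vertically digit by digit to obtain 93; in the third, abbreviated column form, only the 3 is written in the units place with a small carried 1 above the tens column, giving 93 directly. With this illustration in mind, we return to the individual students' work.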
There were only a few students who could calculate mentally and use the concept of tens and units. Besides that, there was only one student (A3) who used an unknown strategy and answered incorrectly. However, he started to use symbols to interpret the candies. Picture 5. A3's Answer. To interpret the student's strategy, the researchers asked for his reasons through an interview. In this case, the researchers concluded that he did not understand the mathematical problem. Although he could read the text properly, when the researcher asked him to write numbers, he could not do it correctly. For example, when he was asked to write the number 4, he wrote it in the following way: Picture 6. Symbol 4 Written by A3. From Picture 5, we can see that A3's knowledge of formal mathematization was still lacking. Instead of using mathematical symbols, he preferred to use a model of the candies (a model of the situation). To investigate the students' abilities in doing addition further, the researchers gave a similar problem without context by asking them to answer 463 + 382 = … Of the ten students, almost all could solve the problem correctly. Six of them, namely A2, A6, A7, A9, A10, and A12, used column calculation with correct reasoning, as follows: Picture 7. Students' Answer Using Column Calculation. There was one student (A5) who answered correctly without column calculation; when the researchers interviewed him, it turned out that he had done the calculation mentally. Three students (A3, A8, and A11) answered incorrectly, but they knew that when there was a symbol (+) the result should become larger; their calculation results, however, were wrong. The next calculation was about subtraction. Herewith the problem given to the students: Ms. Ros bought 75 eggs. On her way, 18 eggs were cracked. The number of Ms. Ros's eggs that are still good is … Of the ten students, there were six students who answered correctly with correct reasoning, namely A2, A3, A6, A7, A8, and A12. They used column calculation with the algorithm and described it in the following ways: Picture 8. Students' Answer Using Column Calculation. From Picture 8, we can see that when they did the calculation, they used splitting strategies from units to tens. When they subtracted the bottom digit from the top digit and the top digit was less than the bottom digit, they borrowed 1, which means 10, from the left and added it to the top digit. When the researchers asked them why they used these procedures, they answered that they knew them from their teacher or their parents. However, none of them used a symbol or model as a bridge from the contextual situation to progressive mathematization. Although they did not make symbolizations, all of them could give a reason why they solved the problem using subtraction procedures: when the eggs of Ms. Ros were cracked along her way, the total number became smaller. The other students did the calculation in a wrong way, as in the following example: Picture 9. Students' Answer Using Column Calculation with a Common Mistake. From Picture 9, we can see that the students had already used column calculation and knew what they should do when the top digit was less than the bottom digit, by borrowing a ten from the left column, but they made a mistake when they subtracted in the tens column: they forgot that the digit in the tens column above should be reduced by 1, which means ten. Therefore, the result became wrong.
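To make the mistake described for Picture 9 concrete, using the numbers of the egg problem (75 − 18): borrowing turns 5 − 8 into 15 − 8 = 7 in the units column, and the correct next step is to reduce the tens digit, (7 − 1) − 1 = 5, giving 57; a student who forgets to reduce the tens digit instead computes 7 − 1 = 6 and reports 67. The exact answers on the students' sheets are not reproduced here, so this is only an illustrative reconstruction of the error pattern.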
To investigate the students' abilities in subtraction further, the researchers gave a similar problem without context by asking them to answer 967 − 348 = … Of the ten students, almost all could solve the problem correctly. Seven of them, namely A2, A3, A5, A6, A7, A10 and A12, used column calculation with correct reasoning, as follows: Picture 10. Students' Answer Using Column Calculation. The remaining three students (A8, A9 and A11) answered incorrectly; their mistakes were generally computational, leading to a wrong result. Special Problems for Deeper Understanding The special problems discussed in this research aim at deepening insight into column calculation, and especially into the calculation algorithm. Because of their puzzle-like character, they also function as recreational arithmetic and consequently help to stimulate the acquisition of a mathematical attitude (TAL TEAM, van den Heuvel-Panhuizen, 2008). Two types of problems were given to the students, i.e. inkblot problems and magical problems. Both were taken from the TAL TEAM (van den Heuvel-Panhuizen) book developed by the Freudenthal Institute (FI), Utrecht University, and the National Institute for Curriculum Development (2001), and had been adjusted to the Indonesian context. The problems are as follows. Inkblot Problem In inkblot problems, an inkblot covers part of a number. As a result, the answer cannot be determined directly; however, it is possible to make the correct choice from several possible answers by reasoning and calculating. The problem was: Picture 11. Inkblot Problem. Of the ten students, seven (A2, A5, A7, A8, A10, A11 and A12) answered correctly. However, only one student (A2) could give a reason, using a mathematical idea, for why he chose that answer, as follows: Picture 12. A2's Answer to the Inkblot Problem. From Picture 12, we can see that he answered 105 because he paid attention to the numbers in the left column: he said that because the left-column numbers were the same, he added them and obtained 105. The other six students answered correctly without giving detailed reasoning; when the researchers interviewed them, they said that they had simply estimated the numbers. Three students (A3, A6 and A9) could not answer at all, and said it was difficult for them because there was no information about the number covered by the ink. From this problem it can be concluded that the students already possessed the basics of mathematical thinking. When the researchers asked whether they had ever encountered a similar problem, all of them said it was the first time they had solved this type of problem. Even so, most of them could solve it correctly, even though some only estimated the numbers without giving reasoning, and the researchers believe that once they become used to such problems their mathematical thinking will develop further. Magical Problem At the end of the written test session, the researchers conducted a mini teaching and learning activity by asking the students to play the magical problem. The situation was set up as if they were really playing a magic trick, and they were asked to follow the researchers' instructions as follows: each student was asked to choose one number with two different digits, and the number chosen by each student had to be different. After that, they were asked to make a new number by switching the order of the digits, and then to subtract the smaller number from the larger.
After that, they were asked to switch the order of the digits in the answer and to add this new number to the result of the subtraction. Finally, they had to show their final result. Surprisingly, every result was 99, even though the students had chosen different numbers. When they were asked how this could happen, none of them could answer correctly. In fact, the result of the subtraction step is always a multiple of 9, which can be understood by calculating with deficits. If the students recognise that the digit sum of every number in the table of 9 is always 9, then when the difference and its reversal are added together the digit 9 appears in both the units place and the tens place, so the answer is always 99 (a short check of this fact is sketched below). Unfortunately, the students' thinking had not yet reached that level, but designing activities in this way enabled students to engage in mathematical learning in a fun way, and they were very enthusiastic about it. All the activities were recorded and can be accessed on YouTube: https://www.youtube.com/watch?v=wfyBbVUwq2A&feature=youtu.be. Column calculation comprises the standard procedures of addition, subtraction, multiplication and division; in this research, however, we focus only on addition and subtraction, including the algorithm. The findings suggest that the procedures implemented by the students were closer to traditional algorithms using column calculation than to mental arithmetic and estimation. The students learned the procedures of the vertical form of calculation through a gradual process of schematization and abbreviation. When they were asked how they knew this procedure, they said that they simply followed the examples from their teacher in class or from their parents at home. Because of the COVID-19 pandemic, all students in Andir, Ciparay, Bandung studied from home, so the parents naturally played an important role in educating the students, while the teacher still provided learning material and assignments through a WhatsApp group. Based on information from the students, the mathematical problems they were given were taken from textbooks, and they rarely encountered problems requiring deeper understanding such as the special problems given in this research (the inkblot problem and the magical problem). This suggests that we have to enrich contextual problems for applying students' skills, which can function as a support point for developing their mathematical thinking. CONCLUSION The previous sections showed that the students used various strategies in solving the calculation problems. Their strategies were classified into three categories: 1) a splitting strategy from the units column to the tens column, after which the interim results are combined; 2) a splitting strategy from the tens column to the units column, with the interim results added vertically digit by digit; and 3) a transition from the units-to-tens splitting strategy that consists primarily of using abbreviated column calculation for the interim results in each column. The common mistakes the students made in this research were caused by misunderstanding the concept of calculation and by carelessness in arithmetic. In general, the students solved calculation problems using the algorithm as a basic procedure learned from their teachers and parents. When they were given special problems requiring deeper understanding, only some of them could solve them. This means that, in this research, some students were not used to developing their mathematical thinking; they simply followed the procedures given by the teacher or parents.
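The following minimal Python check, written by us and not part of the original activity, confirms the reasoning sketched above for the magical problem: for every two-digit number with distinct digits, reversing the digits, subtracting the smaller number from the larger, and then adding the reversed difference always yields 99.

```python
def magical_result(n):
    """Apply the magical-problem steps to a two-digit number n with distinct digits."""
    reversed_n = int(str(n).zfill(2)[::-1])        # switch the order of the digits
    diff = abs(n - reversed_n)                     # subtract the smaller from the larger (a multiple of 9)
    reversed_diff = int(str(diff).zfill(2)[::-1])  # switch the digits of the answer
    return diff + reversed_diff                    # add it to the result of the subtraction

# Check every two-digit number whose digits differ.
results = {magical_result(n) for n in range(10, 100) if n // 10 != n % 10}
print(results)  # {99}
```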
However, when the problem was developed in an engaging way, such as the magical problem, they could work on it with great enthusiasm. This means that, for further study, it will be necessary to develop contexts as well as teaching and learning processes built on mathematical ideas so that students can develop their mathematical thinking. ACKNOWLEDGMENTS We are very grateful to the twelve participating primary school students in grades 3 and 4 in Ciparay, Bandung, West Java, and to their parents, who allowed their children to become research subjects. We also thank the supervisor, who gave suggestions and collaborated well, so this
6,819.4
2021-07-12T00:00:00.000
[ "Mathematics" ]
Optimising compression testing for strain uniformity to facilitate microstructural assessment during recrystallisation Predicting the kinetics of recrystallisation in metals, and recrystallised grain size distributions, is one of the key approaches to controlling and refining grain size during metal processing, which typically increases strength and toughness/ductility. Recrystallisation prediction models and equations are supported by lab-based simulations that can systematically assess recrystallisation over a range of temperatures and strains for different materials and starting grain sizes. This work uses modelling and experimental verification to assess the commonly used compression test sample geometries, to determine strain uniformity and potential sources of error in microstructural assessment, and proposes a modified geometry that increases the area of constant, known strain. Whilst flow stress measurements in all samples showed good agreement, it has been shown that the new plane strain geometry offers a more consistent, homogeneous strain through the sample, such that the large number of grains needed for accurate grain size distribution measurement can be readily achieved. Over double the area within ±10% of the target strain was achieved in the modified plane strain sample compared to a conventional uniaxial specimen; this area was also shown to be more conducive to metallographic assessment and offers in excess of 1500 grains of 250 μm to be assessed per cross-sectional slice. Introduction Recrystallisation can be one of the most effective ways of refining grain sizes in metallic structures. In many manufacturing processes parts are not produced to their final geometry through casting and need subsequent processing, such as hot rolling or forging, to achieve the desired shape. This additional processing can give the added benefit of grain refinement via recrystallisation, using the deformation stored energy. This mechanism is heavily used across many industries, such as steel [1], nickel superalloys [2], copper [3] and aluminium [4], and is most prevalent in production methods such as hot rolling, forging [1] and cold rolled/annealed processing [5]. Whilst many materials can show significant refinement through other means during casting and heat treatment [6,7], for example through nucleation control, recrystallisation remains a dominant field of interest across metallic materials. Recrystallisation is an essential tool for steel processing to achieve target properties, and the development of fundamental understanding and predictive models has been harnessed to drive product developments. Since its first implementation in the 1950s, thermo-mechanically controlled rolling (TMCR) has proven invaluable in the steel industry. By controlling reheating and the amount of rolling reduction around critical temperatures, such as the microalloy precipitate dissolution temperature and the TNR (temperature of no recrystallisation), strength can be increased by >20% in conventional C-Mn steel [8], and TMCR has been instrumental in the development of high strength low alloy (HSLA) and advanced high strength steels (AHSS), which show superior strength/elongation ratios [9]. No single experimental method is used to determine recrystallisation behaviour, or to provide data to support/validate recrystallisation models, encompassing the full range of temperatures, strains and strain rates experienced for different materials and processing conditions.
Uniaxial compression tests (UCT), plane strain compression tests (PSCT) and torsion tests (TT) are used to study the hot deformation behaviour (flow stresses) and evolution of microstructure for metals (particularly steel), for example supporting the development of predictive equations for recrystallisation [10][11][12] during hot rolling/forging. A lot of recrystallisation studies have used load feedback during the tests, either through double hit testing [13] or stress relaxation testing [14], to obtained a rapid assessment of the flow stresses and rates of recrystallisation, which also allow predictions of the mill loads needed to roll at a specific temperature. Flow stress measurements at a given temperature are affected by recovery, recrystallisation and grain size which means it is difficult to identify the exact contribution of each or to determine microstructural parameters such as recrystallised grain size distributions from these types of tests. Assessment of recrystallisation fraction and recrystallised grain sizes can be carried out using UCT, PSCT and TT with interrupted quenching of the sample to observe the microstructure for different strain, strain rate, temperature and hold times. UCT and PSCT are the most common approaches for microstructural evaluation with systems such as Gleeble [15] or Servotest [16] allowing rapid assessment. However, a drawback of these methods is the barrelling effect due to friction at the interface between the sample and anvils [17]. This leads to a characteristic strain distribution with "dead zones" at either end of the sample, by the anvils, and a high strain concentration in the sample centre. The inhomogeneity in strain distribution was reported by Chamanfar et al. [18] for uniaxial deformation of 10 mm Ø x 15 mm cylindrical samples deformed with 0.83 macroscopic strain and a 0.1 friction coefficient. The predicted strain in the dead zone was 0.6 and the peak strain reached 1.1 in the centre of the sample. Bennett et al. [19] developed FEM models of both isothermal axisymmetric compression and uniaxial compression testing and compared the predictions to measured flow stress behaviour using the Gleeble thermo-mechanical simulator. It was found that the errors in stress prediction and measurement can be as large as 20% due to the non-uniform deformation caused by interfacial friction between the sample and anvils, generated heat during deformation and a non-uniform temperature field. Strain inhomogeneity means that microstructural examination needs to be undertaken with care in order for the measured grain size to be related to the imposed local strain. Torsion allows much higher strains and strain rates to be imposed than for UCT and PSCT, for example enabling large strain multi-pass rolling/forging schedules to be simulated in one test [12,20]. A disadvantage of this method is the presence of a strong strain gradient through the radial axis of the sample, resulting in very limited material with uniform strain for microstructural analysis (with added complication in relating the microstructure to imposed local strain at the location characterised) and stress measurements being averaged across the whole sample. In order to precisely understand the local strain regions in torsion testing, then this is commonly coupled with finite element modelling with dynamic recrystallisation, work hardening etc impacting the location used for optical analysis [21]. 
Torsion testing has been successfully used at some institutes for multipass rolling simulations [12]; however, the accuracy of this method relies on modelling and careful experimentation by experienced users due to the large strain gradients produced. Forrest and Sinfield [22] showed that, for a single 360° rotation, the volumetric strain varies across the radial plane at the central length position between 0.8 and 2 for a standard 9.5 mm diameter hollow sample with a 2.4 mm wall thickness, resulting in a range of strain rates from 3 to 34 s⁻¹. Under these conditions there will be a strain gradient across individual grains (assuming a prior austenite grain size of 250 μm, representative of a typical reheated steel slab grain size, a strain variation of 0.1 occurs over that length scale), indicating the challenge when trying to assess grain size changes due to recrystallisation. To compensate for the strain inhomogeneity that arises, fitting parameters/corrections have been proposed to relate the measured flow stress from the UCT, PSCT or TT to the macroscopically measured flow behaviour of the material [17,23,24]. However, for microstructural characterisation that relates microstructural parameters to applied strain/strain rate, there is a need for samples that provide sufficiently large areas of homogeneous strain for hundreds of grains to be characterised. This is particularly important for measurements such as full recrystallised grain size distributions [25] and the influence of segregation on recrystallisation kinetics [26]. This paper reports on the strain inhomogeneity in standard UCT and PSCT samples used in thermo-mechanical simulations, modelled using FEM and verified experimentally, and proposes alternative sample configurations to provide larger areas of uniform strain for microstructural characterisation. Finite element modelling The three most common sample geometries used for UCT and PSCT, summarised in Table 1, were modelled initially to determine the strain distribution across the samples. Modified sample geometries were also derived to provide increased strain uniformity and area of known strain, as discussed in the results section. Deform v12.0.1 software was used to model the strain distribution for the three geometries in Table 1. A macroscopic strain of 0.3 was applied at a strain rate of 1 s⁻¹. The anvils were treated as rigid bodies of 10 mm thickness with a 7° draft angle. The test simulations were carried out at room temperature so any influence of thermal conductivity can be neglected in this investigation. A friction coefficient of 0.15 was used for the contact conditions between the sample and the anvils in all cases; this was verified through measurement of the barrelling observed after deformation and is consistent with the literature [18,19,30]. Histograms of the area corresponding to different strain values were generated from the simulations by remeshing the surface of the y-z plane at x = 0 (the centre of deformation), ensuring a consistent mesh volume (a sketch of this post-processing step is given below). Materials and experimental details Verification of the modelling was carried out using stainless steel 316 that was initially annealed at 1050 °C for 1 h to ensure a fully recrystallised microstructure [31]. Compression testing was carried out in a Gleeble HDS-V40; three tests were carried out for each condition.
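As an illustration of how such area-versus-strain histograms and uniform-strain fractions can be extracted from FE output, the following Python sketch operates on per-element effective strain values and element areas exported from the mid-plane mesh. The array names and the CSV layout are our assumptions for illustration; they are not part of the Deform workflow described above.

```python
import numpy as np

def strain_area_histogram(strains, areas, bin_width=0.02):
    """Return histogram bin edges and the sectioned area falling in each strain bin."""
    bins = np.arange(0.0, strains.max() + bin_width, bin_width)
    area_per_bin, edges = np.histogram(strains, bins=bins, weights=areas)
    return edges, area_per_bin

def area_within_tolerance(strains, areas, target=0.3, tol=0.10):
    """Area (and fraction of total area) whose strain lies within +/- tol of the target strain."""
    mask = np.abs(strains - target) <= tol * target
    return areas[mask].sum(), areas[mask].sum() / areas.sum()

# Hypothetical export: one row per mid-plane element, giving its effective strain and area (mm^2).
strains, areas = np.loadtxt("midplane_elements.csv", delimiter=",", unpack=True)
area, fraction = area_within_tolerance(strains, areas, target=0.3, tol=0.10)
print(f"Area within +/-10% of 0.3 strain: {area:.1f} mm^2 ({100 * fraction:.1f}% of section)")
```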
Uniaxial compression testing was carried out using 0.1 mm graphite foil and 0.1 mm tantalum foil on both contact surfaces (to be consistent with high temperature testing [23,27]). Deformation was carried out at a strain rate of 1 s⁻¹ to a strain of 0.3. All samples were sectioned using a 1 mm diamond blade on a Buehler Isomet precision cutter before mounting in Bakelite and polishing to a ¼ micron finish. Microhardness testing was carried out using a Wilson VH3300 microhardness system with a load of 500 g. Indents were spaced at 250 μm intervals and 500 μm away from the outside edge of the sample. Hardness values were converted to stress using Δσ_y = 3.03ΔH, where Δσ_y is the change in stress (MPa) and ΔH is the change in hardness. This equation was taken from the literature and shows a regression fit of 0.88 for a ΔH of up to 300 HV for a range of austenitic stainless steels [32]. To determine the strain distribution in the deformed sample, the measured hardness values were converted to strength values and the strength was then correlated to the imposed strain required to generate that strength, using Fig. 1 (a sketch of this conversion is given at the end of this passage). Fig. 1 was generated using a UCT1 sample on a HDS-V40 Gleeble deformed to a strain of 0.55. [28,29] (Table 1 footnotes: a Exact dimensions vary in the literature; however, a 1.5-2 aspect ratio (length:diameter) is typically used and has been considered in this work. b 10 × 15 × 50 mm was used in this study. The length of the sample does not influence the straining of the material (verified by modelling) and a longer sample was used to accommodate the experimental verification setup on the Gleeble HDS-V40.) Uniaxial compression The predicted strain distribution in sample UCT1 is given in Fig. 2 after deformation to a macroscopic imposed strain of 0.3. It can be seen clearly that a strong strain variation exists throughout the sample. Due to the interfacial friction between the sample and anvils, barrelling occurs, which leads to a very typical "cross" shaped strain map with high strains towards the sample corners. Whilst the mean strain across the sample is 0.278, close to the imposed macroscopic strain, the strain at the core shows a much higher value of 0.44. Any microstructural evaluation, for example to relate recrystallised grain size to strain, is challenging due to the strain variation: for example, the need to know the exact location of the microstructural measurement in order to relate it to the local strain, and the limited area of uniform strain. Fig. 3a shows the spatial distribution of the strain that falls within ±10% of the 0.3 applied macroscopic strain. The total area shown equates to around 34.8% of the sample, or 52 mm², for the sample cross section. This area will be occupied by around 700 grains with an equivalent circle diameter of 250 μm (i.e. typical of a reheated prior austenite grain size for steels prior to TMCR); this is at the minimum that has been suggested for obtaining a full grain size distribution for recrystallisation kinetics (where 700-1000 [33] or even 2000 grains [34] have been reported), but would be sufficient to assess the mode grain size. Not only is this a reasonably small region, but the distribution of this field would make consistent sectioning and metallurgical assessment difficult (sectioning sensitivity is discussed further later). There is, however, a region of reasonably uniform, albeit higher, strain in the centre of the sample, Fig. 3b.
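A minimal sketch of the hardness-to-strain conversion described above is given below. It assumes that the flow curve of Fig. 1 has been digitised into arrays of (plastic strain, true stress) pairs; the example flow-curve values, the baseline hardness and the array names are ours and purely illustrative, not data from this study.

```python
import numpy as np

# Illustrative flow curve digitised from a figure such as Fig. 1 (plastic strain vs true stress, MPa).
flow_strain = np.array([0.00, 0.05, 0.10, 0.20, 0.30, 0.40, 0.55])
flow_stress = np.array([260., 420., 520., 660., 760., 840., 930.])

def hardness_to_strain(hardness_map, baseline_hardness):
    """Convert a microhardness map (HV) to an equivalent local strain map.

    Delta sigma_y = 3.03 * Delta H converts the hardness increase to a strength increase,
    which is then mapped back to an imposed strain through the measured flow curve."""
    delta_stress = 3.03 * (hardness_map - baseline_hardness)   # MPa
    local_stress = flow_stress[0] + delta_stress               # add to the undeformed strength
    # Invert the (monotonic) flow curve by interpolation to recover the local strain.
    return np.interp(local_stress, flow_stress, flow_strain)

# Example: indents of 210, 260 and 300 HV against an assumed 170 HV annealed baseline.
print(hardness_to_strain(np.array([210., 260., 300.]), baseline_hardness=170.))
```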
This central region (Fig. 3b) can be seen to have a strain of 0.38-0.44 (average 0.4) and comprises only 9% of the cross-sectional area of the sample, which would allow a maximum of 180 grains of 250 μm to be measured from a single section. This sample geometry gives high strain at the corners and a dead zone at either end of the sample. At high temperatures this can be further exacerbated by the presence of a thermal profile along the length of the sample (from the cooler anvils), giving rise to increased strain inhomogeneity. It is important to consider how friction effects and thermal gradients affect barrelling, and hence strain inhomogeneity, which can be quantified using the barrelling coefficient B = (h_f D_f²) / (h_o D_o²), where B is the barrelling coefficient, h_o and h_f are the initial and final height of the specimen, and D_o and D_f are the initial and final diameter at the sample core. The simulation predicts a barrelling coefficient of 1.05 using a friction coefficient of 0.15 and a uniform temperature, which is in agreement with the modelling carried out by Bennett et al. [19]. However, it has been reported in the literature [35] that for high temperature testing (up to 1200 °C) thermal profiles can cause barrelling coefficients in excess of 1.15, and as such a much greater strain variation is possible within samples. It should be noted that the thermal gradient is more of a concern in directly joule-heated samples, such as in a Gleeble, than in furnace testing. The UCT2 sample results follow a similar trend to those for the UCT1 sample, Fig. 4. The central region shows a slightly higher strain than in UCT1, of around 0.42 (Fig. 4); this is because the sample's aspect ratio is closer to 1, which results in the shear bands formed in the corners due to friction interacting with each other at a more focused point in the sample centre. The UCT2 sample geometry shows a region of strain within ±0.03 (10%) of the macroscopically applied strain (39% of the sample volume) similar to the UCT1 sample. When sectioning the sample in the Y-Z plane this would give 47 mm² of consistent strain for microstructural analysis (Fig. 5), which would allow quantification of around 600 grains of 250 μm diameter. The central region experiences a strain that is ~40% higher than the macroscopic strain and the region of uniform strain is only 11% of the cross section, which would allow 160 grains of 250 μm diameter to be measured, suggesting this geometry is no better for microstructural analysis than UCT1. Plane strain compression PSCT in systems such as a Gleeble or Servotest has conventionally used a geometry similar to that highlighted in Table 1. Although this geometry does not abide by the b/w = 5 ratio (sample width:anvil width) suggested by Watts and Ford [36] for a true plane strain condition, the true stress and true strain can be calculated by the following equations: Where ε is the true strain, σ is the true stress, h_o and h are the initial and instantaneous sample height, σ_f is the flow stress, w is the sample width and f is the spread coefficient defined by: Fig. 6a shows the modelled strain distribution for the PSCT sample. It can be seen that there is significant strain variation through the thickness. The shear bands formed from the corners of the anvils meet at the sample core to give a high local strain, resulting in a bimodal strain distribution in the Y-Z plane (Fig. 6b), where the dead zone at the top and the uniform strain in the core are connected by a steep strain gradient.
In the x axis, however, a region of around + -2.5 mm from the central axis shows a strain distribution that is much more consistent than that of uniaxial compression UCT1 and UCT2 samples. Fig. 7a shows the region of ±10% of the macroscopic 0.3 strain for the PSCT sample. There is a very small band where the strain is 0.27-0.33 which would not be appropriate for microstructural analysis as it would be difficult to section accurately to this locationfor example to section in the X-Y plane to achieve a large microstructural region of uniform strain, since the 0.27-0.33 strain zone is < 1 mm thick in the Z-axis. Fig. 7b shows the region of largest consistent strain in the sample (0.44-0.5 for this sample), showing a band across the entire width of the sample. This region provides 31 mm 2 for analysis, moreover this region extends approximately ±2.5 mm thick in the x axis, which would allow 2-3 slices to be readily taken by sectioning for microstructural analysis to give analysis of >750 grains. Modified plane strain compression It was observed from the standard PSCT sample that shear strain generated from the corners of the anvils affect the core strain level in the central cross-section giving a higher local strain then macroscopically applied strain, with the severity of the shear strain being affected by the anvil geometry. Simulations were carried out increasing the anvil width, within the range possible within the Gleeble HDSV40 load capacity, until it was observed that the shear zones from the four anvil corners do not extend into the centre of the sample, giving more uniform strain in the core of the sample close to the macroscopically applied strain. The limitation to increasing the anvil width is the load capacity of the machine as well as the length of uniform hot zone that can be generated. For a Gleeble HDS-40 a hot zone of around 30 mm (length giving ± 5 • C from the core temperature) can be achieved and therefore 20 mm anvils are most appropriate to ensure all deformation is constrained in the hot zone, however for plane strain testing using a furnace to heat samples larger anvils could potentially be used to generate the region of uniform strain provided this is within any space constraints and load capacity of the machine (which will depend on the material and the test temperature). Fig. 8a shows the strain distribution in this new sample geometry, of 50 × 20 × 10 mm (LxWxH) and an anvil width of 20 mm. The strain through thickness at the central cross-section is more uniform than in Fig. 6 With an average of 0.3 strain, Fig. 8b shows the tight normal distribution of the area percentage against strain obtained from this geometry sample. In addition, this sample shows in a region of ±5 mm in the x axis (double that of the standard PSCT) that shows a consistent strain pattern and minimal interaction from the shear bands formed from the anvil corners, which allows multiple sections to be taken for microstructural examination, and also sectioning accuracy becomes less critical. Fig. 9 shows the distribution of uniform strain in the modified PSCT sample. A significant increase in strain uniformity can be seen, in particular the region of 0.3 strain ±10% is much increased (around 122 mm 2 ). This can be refined to a tighter tolerance of ±5% of the target strain (Fig. 9b) where 78 mm 2 falls within this range, allowing over 1500 grains for analysis from a single slice through the sample. 
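As a rough check of the grain-counting argument used throughout this section, the following Python sketch estimates how many grains of a given equivalent circle diameter fit into a measured area of uniform strain, assuming each grain section occupies roughly the area of a circle of that diameter. This is our own illustrative calculation, not part of the original analysis; the counts quoted in the text for the smaller regions appear to use a more conservative effective area per grain.

```python
import math

def grains_in_area(area_mm2, grain_diameter_um=250.0):
    """Estimate the number of grains of a given equivalent circle diameter in a sectioned area,
    assuming each grain section occupies roughly the area of a circle of that diameter."""
    grain_section_mm2 = math.pi * (grain_diameter_um / 2000.0) ** 2  # 250 um -> ~0.049 mm^2
    return area_mm2 / grain_section_mm2

# Modified PSCT region within +/-5% of the 0.3 target strain (78 mm^2 quoted in the text):
print(round(grains_in_area(78.0)))  # ~1589, consistent with "over 1500 grains"
```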
The distribution of the consistent strain region is much more suitable for metallographic assessment. There is, however, a region at the core that has a strain of 0.36. It is therefore suggested that a region of 6 × 6 mm taken at the ¼ width of the sample (shown by the green box in Fig. 9b) gives excellent homogeneity and consistency with the macroscopically imposed strain, allowing the best control and accuracy for recrystallisation studies. Table 2 summarises the strain distributions obtained by modelling for the four sample geometries, where the central region of consistent strain is much higher than the applied strain and is restricted to a relatively small region for the standard samples (UCT1, UCT2 and PSCT). Although using the region of higher strain gives a better region for metallographic assessment, the fact that the strain there is higher than that which was macroscopically applied makes designing the test matrix particularly difficult, as this will have a knock-on impact on the strain rate and also on any inter-pass time calculations during multi-pass simulations. The proposed modified plane strain geometry provides not only a larger area for analysis, but also increases the accuracy of the strain (both in terms of its distribution and its agreement with the macroscopically applied strain). Experimental verification Samples of 316 stainless steel were tested in all four geometries shown in Table 2. The true flow stress curves can be seen in Fig. 10 (calculated using Equations (2)-(4) for plane strain and the classical true stress/strain equations for the uniaxial tests). The flow curves and the 5% proof stress for all geometries are very similar, with the exception of PSCT, which shows a much higher proof stress. This is consistent with Fig. 6b, which showed the highest local strain; this in turn would result in the sample locally reaching the yield stress much earlier in the deformation compared to the other samples. Fig. 11 shows the hardness maps for the four geometries, all on the Y-Z plane at the central location (X = 0), equivalent to the strain maps shown in Figs. 3, 5, 7 and 9, which have been summarised as a histogram of the relative frequency of hardness values in Fig. 12. As with the flow curves, the hardness values have been translated to an imposed strain value using the factor of 3.03 to convert the hardness to a stress and then relating the stress to an imposed strain from Fig. 1. Fig. 13 shows composite images made up from the predicted and measured regions of 0.3 ± 10% strain for all four sample geometries deformed to a macroscopically applied 0.3 strain. Good agreement can be seen for both plane strain samples. UCT2 shows the high strain region at the core (Fig. 11), but the predicted dead zone at the top and bottom of the sample appears to be very small/not picked up by the hardness map, suggesting a lower friction coefficient occurred than was used in the modelling. UCT1, however, does not show good correlation between the predicted and measured profiles. The uniaxial samples are very sensitive to small errors in sectioning and to the amount of material removed during the preparation (grinding and polishing) stage. It can also be seen that the sample shows asymmetry, with the bottom of the sample showing a larger dead zone than the top, suggesting a difference in friction coefficient (although the diameter at either end of the sample varied by < 0.2 mm) or a non-planar section taken from the sample. This highlights further the sensitivity of the complex strain profiles seen in the UCT tests. Fig.
14 shows the inherent variability of the UCT testing approach, where direct repeat tests under identical conditions show variation in the strain spatial distribution (repeat tests showed the same flow stress curves in each case, and therefore the variability is local rather than related to the global test setup). Although large portions of all these samples fall within the ±10% of 0.3 strain, the location of these regions is not stable, making consistent microstructural assessment difficult. These tests have been carried out using brand new anvils and therefore the performance of these tests would be expected to decrease with more practical (i.e. partially worn) anvil conditions. Sectioning sensitivity Further variability between tests can be sourced to sectioning accuracy. A typical diamond blade cutting wheel has a thickness of around 0.5 mm, where sectioning cutting locations, non-planar cuts and the level of grinding before final polish, all adding to inaccuracies in the precise location of the comparison between the experimental data sets. The variability/consistency of the different compression methods can be seen in Fig. 15 where the strain distribution in the Y-Z plane has been plotted at various offsets to the central plane. It can be seen that the strain distribution in the modified PSCT even at 5 mm away from the central axis remains almost identical, allowing for multiple slices to be taken but also allows for a larger degree of error when sectioning the sample. The UCT sample however, shows varying strain patterns for the different slices, with 5 mm away from the central axis showing almost no dead zone but a higher proportion of 0.3 strain. Whilst the mode at this section shows a more preferential distribution, the variability with sectioning plane means that poor repeatability/consistency would be expected. The reduction in the proportion of the dead zone with increasing distance away from the central slice helps understand some of the variability and lack on dead zone seen Fig. 15. Any sectioning inaccuracy leads to a surface off the midplane being analysed, with the section out of the Y-Z plane explaining the asymmetry at the top and bottom of the UCT samples. Whilst cutting and grinding consistency can be improved, it is important to show here the variability that can arise even when producing a small number of samples. For this, the PSCT sample shows much more repeatability and less sensitivity to small variations in preparation. In addition to the PSCT showing a much larger strain variation, the excellent correlation between the predicted and measured strain makes this test much more reliable. Moving to a 20 mm anvil size, as with the modified PSCT sample, gives a predictable large area of uniform strain and offers the best setup for microstructural analysis post testing. The ideal section Considering all of the above, then the ideal section will have the following traits: • Uniform strain. • Strain that corresponds to the macroscopically applied strain. • Area of uniform strain is distributed such to be appropriate for metallographic assessment. • Has good tolerance to inaccuracies during sectioning and preparation. Then the ideal section can be seen in Fig. 16 which is taken in the x-z plane at the ¼ and ¾ position in the y axis. This has the added benefit of being in the deformation plane and as such is equivalent to looking at the rolling axis of strip products formed during rolling. 
This is the suggested best section that can be achieved for microstructural analysis of recrystallisation within the limits of lab-based thermo-mechanical testing equipment. For softer alloys such as copper and aluminium, wider anvils will give even greater strain uniformity for metallographic analysis, although this would provide minimal improvement to the accuracy of the flow stress curves. Conclusions Lab-based recrystallisation studies are important for determining optimum industrial processing conditions (strain, temperature, strain rate) to achieve the desired microstructural refinement. Several methods have been developed to characterise recrystallisation, including uniaxial compression testing (UCT) and plane strain compression testing (PSCT). Flow stress analysis (double hit tests or stress relaxation) is used to determine recrystallisation kinetics; however, microstructural analysis is required to determine recrystallised grain sizes/grain size distributions and can also be used to assess recrystallisation kinetics. A high degree of strain homogeneity is required in the sample to give sufficient area for microstructural analysis. This paper considers the strain distribution in the most common compression test geometries used for recrystallisation studies and compares the strain distribution uniformity and the area of uniform strain available for metallographic assessment. Modelling and experimental validation have been used and a modified PSCT sample geometry is proposed. The main conclusions are: Both commonly used UCT sample geometries, with an applied global strain of 0.3, showed core strains of 0.4 and 0.42. In addition, although around 50 mm² fell within ±10% of the applied strain, the distribution of this strain makes metallographic assessment difficult, as even small offsets from the ideal cross-section position reduce this uniform strain area. A standard plane strain geometry using 10 mm anvils showed a strong through-thickness variation in strain with an average core strain of 0.5, with only a small area achieving the desired strain. A modified plane strain sample has been suggested which generates much greater uniformity of strain, providing an area of 122 mm² within ±10% of the applied strain on a cross-section slice; this is over twice that of any other geometry and is also more favourably spatially distributed. Experimental verification was carried out using stainless steel 316 in a Gleeble HDS-V40 for the different sample geometries, followed by microhardness mapping. Good agreement was seen between the modelled and experimental strain values for the plane strain sample geometries; however, uniaxial compression testing showed large amounts of asymmetry and variability between repeat tests (consistent with the proposed susceptibility of the sample to sectioning errors). The accuracy needed during sectioning was also taken into consideration, with the plane strain samples showing a much greater tolerance to inaccuracies in cutting and grinding of samples to assess the central plane. Therefore, it is suggested that for metallographic assessment of recrystallisation, such as recrystallised grain size distributions, a plane strain sample geometry tested with >20 mm width anvils provides excellent strain uniformity and a large area suitable for microstructural characterisation.
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
7,015.4
2021-09-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Global neighbourhood domination A subset S of vertices of a graph G is called a global neighbourhood dominating set (gnd-set) if S is a dominating set for both G and N(G), where N(G) is the neighbourhood graph of G. The global neighbourhood domination number (gnd-number) is the minimum cardinality of a global neighbourhood dominating set of G and is denoted by γ_gn(G). In this paper, sharp bounds for γ_gn(G) are supplied for graphs whose girth is greater than three. Exact values of this number for paths and cycles are presented as well. A characterization of the subsets of the vertex set of G that are global neighbourhood dominating sets of G is given, and the graphs of order p having gnd-numbers 1, 2, p − 1 and p − 2 are also characterized. Subject Classification: 05C69. Introduction & Preliminaries Domination is an active subject in graph theory, and has numerous applications to distributed computing, the web graph and ad hoc networks. For a comprehensive introduction to the theoretical and applied facets of domination in graphs the reader is directed to the book [4]. A set S of vertices is called a dominating set of G if each vertex not in S is joined to some vertex in S. The domination number γ(G) is the minimum cardinality of a dominating set of G [4]. Many variants of the domination number have been studied. For instance, a dominating set S of a graph G is called a restrained dominating set if every vertex in V − S is adjacent to a vertex in S as well as to another vertex in V − S. The restrained domination number of G, denoted by γ_r(G), is the smallest cardinality of a restrained dominating set of G [3]. A set S is called a global dominating set of G if S is a dominating set of both G and its complement. The global domination number of G, denoted by γ_g(G), is the smallest cardinality of a global dominating set of G [6]. A dominating set S of a connected graph G is called a connected dominating set of G if the induced subgraph <S> is connected. The connected domination number of G, denoted by γ_c(G), is the smallest cardinality of a connected dominating set of G [7]. A dominating set S of a connected graph G is called an independent dominating set of G if the induced subgraph <S> is a null graph [4]. Let G be a connected graph; then the neighbourhood graph of G, denoted by N(G), has the same vertex set as G, with two vertices adjacent in N(G) whenever they have a common neighbour in G [2]. Recently we have introduced a new type of graph known as a semi complete graph. Let G be a connected graph; then G is said to be semi complete if any pair of vertices in G have a common neighbour. The necessary and sufficient condition for a connected graph to be semi complete is that any pair of vertices lie on the same triangle or lie on two different triangles having a common vertex [5]. In the present paper, we introduce a new graph parameter, the global neighbourhood domination number, for a connected graph G. We call S ⊆ V(G) a global neighbourhood dominating set (gnd-set) of G if S is a dominating set for both G and N(G). The global neighbourhood domination number is the minimum cardinality of a global neighbourhood dominating set of G and is denoted by γ_gn(G).
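As a small illustration of these definitions, the following Python sketch (using networkx) builds the neighbourhood graph N(G), in which two vertices are joined whenever they share a common neighbour in G, and finds a minimum gnd-set by brute force. This is our own illustrative code, not part of the paper, and the exhaustive search is practical only for small graphs.

```python
from itertools import combinations
import networkx as nx

def neighbourhood_graph(G):
    """N(G): same vertex set, with u~v whenever u and v have a common neighbour in G."""
    N = nx.Graph()
    N.add_nodes_from(G.nodes)
    for u, v in combinations(G.nodes, 2):
        if set(G[u]) & set(G[v]):
            N.add_edge(u, v)
    return N

def is_dominating(G, S):
    """True if every vertex is in S or adjacent to a vertex of S."""
    return all(v in S or any(u in S for u in G[v]) for v in G.nodes)

def minimum_gnd_set(G):
    """Smallest set dominating both G and N(G) (brute force, small graphs only)."""
    N = neighbourhood_graph(G)
    for k in range(1, G.number_of_nodes() + 1):
        for S in combinations(G.nodes, k):
            if is_dominating(G, set(S)) and is_dominating(N, set(S)):
                return set(S)

G = nx.cycle_graph(6)  # the cycle C6
S = minimum_gnd_set(G)
print(S, len(S))  # e.g. {0, 3} 2
```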
Example.Suppose is a graph representing a network of roads linking various locations.Some essential goods are being supplied to these locations from supplying stations.It may happen that these links(edges of ) may be broken for some reason or the other.So we have to think of maintaining the supply of goods to various locations uninterrupted through secret links(edges in the neighbourhood graph of ).As the neighbourhood graph of is a spanning subgraph of , the construction (maintainance) cost of secret links can be minimized, when compared with the complementary graph of .The global neighbourhood domination number will be the minimum number of supplying stations needed to accomplish the task of supplying the goods uninterruptedly. All graphs considered in this paper are simple, finite, undirected and connected.For all graph theoretic terminology not defined here, the reader is referred to [1]. In section [2], sharp bounds for are supplied for the graphs whose girth is greater than three.In section [3], we have given a characterization result for a proper subset of the vertex set of to be a gnd -set of and also characterized the graphs whose gnd -numbers are 1 2 1 2. Bounds for the global neighbourhood domination number In this section, we obtain some bounds for the gnd -numbers of graphs whose girth is greater than three.Proof: Let be a minimum gnd -set of .By hypothesis every vertex in is non adjacent with atleast one vertex in .Otherwise we get a contradiction to that is a gnd -set for . Let 1 2 ( ) be the neighbours of in .Then ( )} is a gnd -set of and its cardinality is ( ) + 1. From ( 1) and ( 2) Furthermore the lower bound is attained in the case of 4 and upper bound is attained in the case of 3 .Hence the bounds are sharp. Note: The upper bound holds good for any connected graph .be a connected graph and be a minimum dominating set of .If there is a vertex in such that is adjacent to all the vertices in , then ( ) 1 + ( ). Proof: Assume that ( ) for some .The proof follows from the fact that S { } is a gnd -set of . Theorem 2.3.be a minimum dominating set of .Then ( ) = 1 + ( ) i there is a vertex in satisfying: (i) ( ) , each of the vertices in ( ) is isolated in . Proof: By hypothesis, every gnd -set is a global dominating set in .Hence ( ) ( ). Note: Under the hypothesis given in the Theorem(2.10)and Theorem(2.9),we Theorem 2.6.be a connected graph.Then = i (i) Each edge in lies on 3 or 5 . (ii) There is no path of length four between any pair of non adjacent vertices in . (i) Let 1 2 be an arbitrary edge in , then by our assumption 1 2 is an edge in .Suppose 1 2 . Since 1 2 ( ), 1 2 lies on a cycle 3 in .Suppose 1 2 . Since 1 2 , there is a 3 in ( ) such that is a path in .This implies 1 3 3 2 ( ).So there is a path of length four from 1 to 2 in .Thus S { 1 2 } is a 5 -cycle in .Therefore 1 2 lies on 5 . Hence each edge in lies on 3 or 5 . (ii) If there is a path of length four between any pair of non adjacent vertices in , then there is an edge in which is not in .Hence 6 = , which is a contradiction.Assume that the converse holds.Let 1 2 be an arbitrary edge in .Then by (i) of our assumption 1 2 lies on 3 or 5 .In either case 1 2 is an edge in .Hence . Characterization and Other Relevant Results. In this section we have given the characterization for a proper subset of the vertex set of a graph to be a gnd -set.). So in either case 1 lies on the edge whose end points are totally dominated by the vertices in . Assume that the converse holds. Theorem 2 . 1 . 
If G is a triangle-free graph, then G is obtained from P_3 or P_4 by adding zero or more leaves to the stems of the path. Theorem 2.2. Theorem 3.1. (Characterization Result) Let G be a connected graph. A set S is a gnd-set of G if and only if each vertex in V − S lies on an edge whose end points are totally dominated by the vertices in S. If v_4 ≠ v_2, then v_1 lies on the edge v_1v_4, where v_1 is dominated by v_2 and v_4 is dominated by v_3 (v_2, v_3
1,850.8
2014-03-01T00:00:00.000
[ "Mathematics" ]
Chemistry of Therapeutic Oligonucleotides That Drives Interactions with Biomolecules Oligonucleotide therapeutics that can modulate gene expression have been gradually developed for clinical applications over several decades. However, rapid advances have been made in recent years. Artificial nucleic acid technology has overcome many challenges, such as (1) poor target affinity and selectivity, (2) low in vivo stability, and (3) classical side effects, such as immune responses; thus, its application in a wide range of disorders has been extensively examined. However, even highly optimized oligonucleotides exhibit side effects, which limits the general use of this class of agents. In this review, we discuss the physicochemical characteristics that aid interactions between drugs and molecules that belong to living organisms. By systematically organizing the related data, we hope to explore avenues for symbiotic engineering of oligonucleotide therapeutics that will result in more effective and safer drugs. Introduction Nucleic acids were one of the most fundamental molecules generated on primitive Earth, and the evolution of molecular interactions between nucleic acids and their surrounding molecules represents the history of life. In recent years, "oligonucleotide therapeutics" have garnered interest as novel therapeutic modalities. Oligonucleotide therapeutics is a generic term for drugs with nucleic acids in their backbone. A wide variety of oligonucleotide-based therapeutics have been developed, including antisense oligonucleotides (ASOs), small interfering RNA (siRNA), aptamers, and decoys. Indeed, ASOs and siRNAs are synthetic nucleic acids designed to specifically modulate the expression, transcription, and translation of targets ( Figure 1a). Superior backbone modifications have improved (1) affinity and selectivity to the target RNAs, (2) in vivo stability, and (3) the primary immune response (Figure 1b). However, owing to the "sticky" nature of the naturally-occurring nucleic acid molecules, highly optimized oligonucleotide drugs demonstrate a variety of side effects owing to some specific interactions with biological molecules and, hence, have not yet been adopted for widespread use (Spinraza ® is the only blockbuster among all the nucleic acid drugs that have been launched). To reduce the side effects of oligonucleotide therapeutics and analyze their pharmacological effects, it is essential to foster a better understanding of their binding selectivity and specificity. Adapting the argument by Eaton et al. [1], the binding specificity (α s ) of nucleic acids can be expressed in terms of thermodynamics, as in the following Equation (1): where A represents ASO, T represents the target RNA, O i represents the binding molecule other than the target RNA, K T specifies the binding constant between A and T, K O i specifies the binding constant between A and O i , and [T] and [O i ] represent the respective component concentrations. Binding specificity (α s ) has no direct relationship with binding constant with the target (K T ), indicating that balance with off-target binding reactions is important. Since a broad range of biological components bind nonspecifically to nucleic acids with affinities in the order of 10 −6 M, it is assumed that oligonucleotides must have a dissociation constant of nM or less for adequately distinguishing between non-target and target molecules. 
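To make the competition between target and off-target binding discussed above concrete, the short Python sketch below evaluates a plausible form of the binding specificity, alpha_s = K_T[T] / (K_T[T] + sum_i K_Oi[O_i]). Since Equation (1) is not reproduced here, this particular functional form, as well as the example binding constants and concentrations, are our assumptions for illustration only.

```python
def binding_specificity(K_target, target_conc, offtargets):
    """Assumed specificity: alpha_s = K_T[T] / (K_T[T] + sum_i K_Oi[O_i]).

    K_target: association constant for the target RNA (1/M)
    target_conc: target concentration (M)
    offtargets: list of (association constant, concentration) pairs for competing biomolecules
    """
    target_term = K_target * target_conc
    offtarget_term = sum(K * c for K, c in offtargets)
    return target_term / (target_term + offtarget_term)

# Illustrative numbers only: an ASO with Kd = 1 nM for its target (K_T = 1e9 1/M) at 1 nM target,
# competing with abundant nonspecific binders of Kd ~ 1 uM (K = 1e6 1/M) at 10 uM total.
print(binding_specificity(1e9, 1e-9, [(1e6, 1e-5)]))   # ~0.09
# A micromolar-affinity oligonucleotide fares far worse against the same background.
print(binding_specificity(1e6, 1e-9, [(1e6, 1e-5)]))   # ~1e-4
```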
Various interactions are observed in the cell, as shown in Figure 1c, and each could potentially affect the efficacy and safety of oligonucleotide therapeutics [2,3]. However, the fortification of off-target interactions of oligonucleotides by chemical modification has been key for successful improvements in the pharmacokinetics of oligonucleotide therapeutics. In this review, we classify and explain the interactions between nucleic acids (especially ASOs) and biomolecules by characterizing the process of the drug reaching target cells by interactions (1) in the blood, (2) on the cellular surface, and (3) inside the cell, to identify clues for achieving material symbiosis. Oligonucleotide Therapeutics with Phosphorothioate Modification Oligonucleotide therapeutics are now recognized as a reliable modality because of their improved thermodynamics attributed to chemical modifications. Oligonucleotide agents that enter systemic circulation from the site of administration are eliminated from the blood via various processes. Improvements in blood retention duration are important parameters that are directly related to availability in peripheral tissues. The half-life of natural oligonucleotides in blood is extremely short, ranging from 7.6 to 9 min [4]. The main route of elimination is degradation by nucleases; however, renal filtration (Mw < 40 kDa) also plays a significant role as the molecular weight of the agents is generally below the filtration limit (~5 kDa). Phosphorothioate (PS) modification, which replaces the nonbridging oxygen group with a sulfur atom, protects the phosphate backbone, which is a major target for nucleases, from hydrolysis. It increases the apparent molecular weight by improving interaction with the hydrophobic face of the protein, which is attributed to the hydrophobicity and polarization rate of sulfur. These characteristics associated with PS modification also allow oligo drugs to bypass renal filtration. This PS modification has been utilized for all ASOs and siRNAs, with a few exceptions (phosphorodiamidate morpholino oligomers [PMOs]). Therefore, PS modification is the de facto standard of systemically administered oligonucleotide therapeutics. At therapeutic doses, the blood protein binding rate of PS-modified oligonucleotide-based treatments is thought to be over 90%. Interactions between albumin, other blood proteins, and blood cell surface proteins have been confirmed [5]. The group led by Seth et al. at Ionis comprehensively and quantitatively analyzed the interaction of a gapmer-type PS-modified antisense nucleic acid containing 2 -O-methoxyethyl RNA (PS-MOEs) (Figure 1a), a sugar-modified RNA, with 25 major blood proteins using a fluorescence polarization (FP) assay [5]. The finding that the binding affinity of PS-MOE toward plasma proteins generally falls within the micromolar (10 −6 M) range of the dissociation constants is particularly important. In human or mouse plasma, PS-MOEs primarily bind to albumin present in abundant amounts with a dissociation constant (K d ) of approximately 12.7 µM and strongly bind to histidine-rich glycoprotein (HRG) with a K d of 0.009 µM. Species differences may exist because the composition of plasma proteins varies among animal species. In addition, the interaction between PS-MOEs and proteins is strongly influenced by the number of PS linkages, chain length, number of charges, the isoelectric point of proteins, pH, and salt concentration. 
More impressively, when comparing the dissociation constants of various proteins with PS-modified dA 20-mer and dT 20-mer, the latter tend to show values that are smaller by two orders of magnitude, which indicates stronger binding [5,6]. Seth et al. concluded that the "flexibility" of oligonucleotides is also a pivotal factor in determining the interactions between nucleic acids and proteins because PS dA 20-mer shows stronger base-pair stacking interactions than PS dT 20-mer. In contrast, side effects (such as a reduction in red blood cell counts, thrombocytopenia, prolonged activated partial thromboplastin time (aPTT), complement activation, and inflammatory reactions) observed in clinical trials can be attributed to the interaction between plasma components, such as blood cell components, clotting factors, and complement and nucleic acids, which are seemingly weak and minor [7][8][9]. While mipomersen is highly chemically modified to reduce lethal interactions, it shows a significantly prolonged elimination half-life in the blood (23-30 days), resulting in unintended long-term exposure in the blood [10]. Therefore, from the perspective of material symbiosis, it is necessary to scrutinize the introduction of new parameters, such as the duration of interactions and the unintended biological response elicited by such interactions (along with the biomarker search for quantitative evaluation), in addition to conventional parameters, such as dissociation constants and concentrations required for interactions. Accumulating evidence in clinical trials has demonstrated the production of anti-drug antibodies (ADAs) against PS-MOEs [11,12]. After treating humans and monkeys with PS-MOEs, such as mipomersen, inotersen, and drisapersen, production of ADAs ranging from 20-70% was observed. The PS-MOEs comprise many diastereomeric mixtures as active ingredients because of the chirality of the phosphorous atom. Interestingly, ADAs are synthesized specifically from these active ingredients, even though the concentration of each isomer in the PS-MOEs is very low. This suggests that ADAs may not recognize the specific structure of nucleic acids precisely and robustly. However, such recognition may be mediated via some "weak interactions". To the best of our knowledge, no report has demonstrated that production of ADAs reduces the effectiveness of oligonucleotide therapeutics. However, it is necessary to closely examine the mechanism of ADA production (e.g., T cell dependence), whether other chemical modifications of nucleic acids can also induce ADAs, and how ADAs recognize nucleic acids. In recent years, PS steric control has been examined [13][14][15], and the efficacy and safety of PS-modified nucleic acids with a single diastereomer have garnered significant interest [16,17]. Controlling PS conformation leads to minor differences in the orientation of the sulfur atom between the major groove and minor groove, affecting hydration, ion coordination, and the recognition of enzymes, such as RNase H. Progress in the research of nucleic acid chemistry is expected to enable more precise control of the interaction between nucleic acids and biomolecules. Phosphorodiamidate Morpholino Oligomer (PMO) Phosphorodiamidate morpholino oligomers have high nuclease resistance owing to the presence of electrically neutral and unnatural phosphate backbones called phosphorodiamidate linkages. 
However, the number of adverse events attributed to interactions with plasma proteins is lower than those observed with PS-MOEs because they generally do not interact with proteins as much as PS oligomers. However, PMOs tend to have lower tissue bioavailability and demonstrate effects at a higher dose (30-80 mg/kg/a week). The elimination half-life is approximately 2-15 h, which is shorter than that of PS-MOE [18,19]. Furthermore, their half-life in tissues is approximately 7-14 days. They are primarily excreted via urine in an unchanged form [20,21]. Three PMOs are commercially available, namely Vyondys53 ® , Viltepso ® , and Amondys45 ® . These RNase H-independent ASOs act on the mRNA-encoding dystrophin protein in Duchenne muscular dystrophy and partially restore the function of the defective dystrophin protein by skipping specific exons. Analogs, such as PPMO with cationic peptides and PMOplus with partial positive charges developed by introducing piperazine residues, have been developed [22,23]. Although PPMOs with large cationic tails show dose-dependent toxicity (such as coma and weight loss) [24,25], a method of regulating the kinetics by utilizing unitized structures has the potential to fine-tune the physical properties and avoid class effects. Peptide Nucleic Acids (PNAs) Nielsen et al. first reported a PNA with an electrically neutral aminoethyl glycine backbone in 1991 [26]. These PNAs are DNA mimics in which peptide-like backbones are substituted with negatively charged phosphodiester linkages. The PNA drug discovery has superior complementary DNA-or RNA-binding properties relative to its natural counterparts [27,28], and PNA-based drug discovery has uniquely enabled pharmaceutical development by targeting the RNA genome and oligonucleotide therapeutics. Because of its properties, PNA has been developed as a medicine; however, the desired therapeutic effect has not been observed because of the pharmacokinetics of PNAs [29]. The PNA only comprises four elements, namely hydrogen, oxygen, nitrogen, and carbon (HONC), and does not contain heavy atoms or other heteroatoms, such as fluorine. Elements relatively heavier than HONC, such as phosphorous, sulfur, and fluorine, have unique physicochemical and electronic properties and are essential for modulating molecular recognition in vivo. It must be noted that PNA differs from other nucleic acids comprising a phosphate backbone because it does not contain any of these elements and electrical charges. In this Section, we summarize the interactions between the oligonucleotide class of drugs and plasma proteins. For nucleic acid therapeutics other than those involving PSbased modifications, data on the types of plasma proteins/components and the strength of their interactions is insufficient. Further research is necessary in the future. Molecular Mechanisms of Cellular Intake Oligonucleotide drugs interact weakly (K d~1 0 -6 M) with carrier proteins in the systemic circulation, which enables the retention of compounds in the blood. It dissociates reversibly, and a free fraction can bind to the extracellular domain of the surface proteins on targeted cells. The fraction is subsequently absorbed and internalized. 
The PS-ASOs can interact (nonspecifically) with several types of membrane proteins, such as integrins, G protein-coupled receptors (GPCRs), receptor tyrosine kinases (RTKs), toll-like receptors (TLRs), epidermal growth factor receptors (EGFRs), scavenger receptor class B (SR-B), low-density lipoprotein receptors (LDLR), and asialoglycoprotein receptors (ASGPR) [30,31]. These receptors quickly internalize PS-ASOs. This uptake proceeds in an energy-dependent manner, and the process can be saturated. The internalization process is affected by the type of membrane protein and the rigidity of the lipid raft. Most of the aforementioned membrane receptors are involved in clathrin-mediated endocytosis. Oligonucleotide drugs incorporated into this "productive" pathway efficiently translocate to the cytoplasm, rapidly localize to the nucleus, and exhibit antisense activity. Integrin-mediated internalization and the CLIC/GEEC pathways are also part of this productive route. In contrast, PS-ASOs taken up by high-capacity macropinocytosis tend to localize in lysosomes, which inhibits the antisense activity; hence, this process has been characterized as a "nonproductive" pathway. Little is known about how PS-ASOs escape from endosomes after internalization via endocytosis. Nevertheless, some molecules have been identified that explain the trafficking of ASOs. For instance, ANXA2 colocalizes with PS-ASOs during endosome maturation (they do not seem to interact directly with each other). In the absence of ANXA2, PS-ASOs remain in the primary endosome, resulting in a decrease in ASO activity. This suggests that the protein may help promote drug escape to the cytosol once the drug has been transported to late endosomes [32]. Similarly, the GTPase RAB5C, a factor associated with the fusion of vesicular membranes, is essential for the uptake of PS-ASOs [33]. Lysobisphosphatidic acid (LBPA) could be an important player that helps ASOs escape from endosomes to the cytosol at a later stage [34]. LBPA is a phospholipid present in the internal membranes of late endosomes that is responsible for mass transport in and out of vesicles. A comparison of PS-ASO activity in various cancer cell lines showed that knockdown tended to be lower in cell lines with greater migration of PS-ASO to lysosomes, which is considered indirect evidence that, during maturation, PS-ASOs escape into the cytosol at the early-to-late endosomal stage [33]. Thus, it is assumed that the interaction with membrane surface proteins of the target cell (direct interaction between membrane lipids and therapeutic nucleic acids has not been reported) triggers the migration of oligonucleotides into endosomes via various endocytosis pathways, followed by escape into the cytoplasm from late-stage endosomes. It is surprising that large, negatively charged molecules such as nucleic acids (although smaller than proteins) can escape directly into the cytoplasm, since large molecules, such as proteins, cannot escape from endosomes and are generally transported to lysosomes for degradation. That said, only approximately 0.1% of ASOs are able to enter the cytoplasm via these processes [35].
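To put the roughly 0.1% figure in perspective, the toy calculation below combines an assumed split between productive and nonproductive uptake with assumed escape efficiencies. Every number in it is an illustrative placeholder; only the overall bookkeeping (uptake share times escape probability) reflects the logic described above.

```python
# Toy bookkeeping of PS-ASO uptake routes. All fractions are invented
# placeholders to illustrate why only a tiny share of internalized ASO
# reaches the cytosol; they are not measured values from the cited work.

internalized = 1.0                 # normalize total internalized ASO to 1

routes = {
    # route: (assumed fraction of uptake, assumed probability of endosomal escape)
    "productive (clathrin, CLIC/GEEC)": (0.30, 0.003),
    "nonproductive (macropinocytosis)": (0.70, 0.0001),
}

cytosolic = 0.0
for name, (share, escape_prob) in routes.items():
    delivered = internalized * share * escape_prob
    cytosolic += delivered
    print(f"{name:35s} contributes {delivered:.5f}")

print(f"total cytosolic fraction ~ {cytosolic:.4%} of internalized ASO")
# With these made-up numbers the cytosolic fraction lands near 0.1%,
# the order of magnitude quoted in the text.
```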
Delivery of Nucleic Acid via Specific Molecular Interactions
Through the aforementioned nonspecific interactions between PS-ASO and proteins, it is clear that PS-ASO is distributed to and consumed by various tissues and organs throughout the body and exhibits knockdown activity [36]. In contrast, owing to concerns about efficiency and side effects, targeting techniques have been developed to increase the selectivity of distribution to target tissues [37]. In particular, we would like to focus on studies, including ours, on the ligand-targeted drug delivery (LTDD) method, which involves conjugation of a ligand so that the therapeutic oligonucleotide engages membrane proteins specifically expressed on the target cell surface.
Asialoglycoprotein Receptor (ASGPR)
The ASGPR is one of the earliest lectins to be discovered and is highly expressed in hepatocytes (0.7-5 × 10^5 molecules per cell). Serum glycoproteins are promptly transported to the liver via receptor-mediated endocytosis. The major subunit H1 and minor subunit H2 form hetero-oligomers on the cellular membrane (Figure 2a) [38]. The receptor recognizes glycoproteins bearing N-acetylgalactosamine (GalNAc). Owing to this property, GalNAc conjugation has been considered as a methodology for hepatic drug delivery systems (DDS) for therapeutic oligonucleotides. The affinity of GalNAc for ASGPR is estimated to be very low (dissociation constant, Kd ~ 40 µM) [39]. Multivalency improves the binding affinity to the order of 10^-9 M [40]. The first report on such conjugates was published by Ts'o et al. in 1995 [41]. Since then, the activity of therapeutic oligonucleotides has been improving. In 2014, Prakash et al. showed that attaching trimeric (triantennary) GalNAc ligands to ASOs with MOE or constrained ethyl bridged nucleic acid (cEt) can improve their knockdown activity in the liver by approximately 10-fold [42]. Similarly, a triantennary GalNAc conjugate improved this effect for siRNAs [43]. GIVLAARI® and OXLUMO™, which carry trivalent GalNAc, were among the first GalNAc-conjugated siRNA drugs approved worldwide. In this context, we investigated the previously hypothesized trimeric ligand model [44]. Many previous studies had only evaluated the affinity of GalNAc with different valences for the ASGP receptor or its effect on cellular uptake, and the in vivo activity had not been adequately evaluated until recently. Hence, we developed a monomeric GalNAc phosphoramidite unit, with which we can freely change the ligand valency, and introduced it into PS-ASOs equipped with bridged nucleic acids (2′,4′-BNA/LNA). Unexpectedly, a remarkable improvement in knockdown activity was observed with the introduction of only one GalNAc (Figure 2b,c) [45][46][47]. This suggests that even a univalent GalNAc ligand can engage the clustered ASGPR. Such a weak interaction may be favorable for efficient turnover of the ASO influx cycle into the liver.
Glucagon-Like Peptide-1 Receptor (GLP1R)
The GLP1R is a GPCR that belongs to the secretin receptor family and is highly expressed, especially in the pancreas. In the pancreatic islets of Langerhans, it is present in insulin-secreting β cells and somatostatin-secreting δ cells. When GLP1R is activated, it is rapidly internalized by the cell and subsequently recycled. Ämmälä et al. developed a GLP1R peptide agonist (eGLP1) that fuses human GLP1 with the GLP1-like peptide exenatide, and conjugated it to an ASO via biodegradable phosphodiester and disulfide bonds [48,49]. Typically, the binding of GLP1R with the GLP1 peptide is very strong (Kd ~ 0.5 nM), and this eGLP1 conjugate also demonstrates potent activity at the GLP1 receptor.
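The Kd values quoted in this Section span several orders of magnitude (GalNAc/ASGPR at roughly 40 µM for the monovalent ligand versus the nanomolar range for multivalent designs, and GLP1/GLP1R at about 0.5 nM). The short sketch below simply converts such constants into fractional receptor occupancy at a given free ligand concentration; the ligand concentration used is an arbitrary illustrative value, and the one-site model ignores avidity, internalization, and recycling.

```python
# Fractional receptor occupancy theta = [L] / ([L] + Kd) for the dissociation
# constants quoted in the text. The free ligand concentration (100 nM) is an
# arbitrary illustrative choice, not a value from the cited studies.

KD_VALUES = {
    "monovalent GalNAc / ASGPR": 40e-6,   # ~40 uM (from the text)
    "multivalent GalNAc / ASGPR": 1e-9,   # ~nM range after multivalency (from the text)
    "GLP1 peptide / GLP1R": 0.5e-9,       # ~0.5 nM (from the text)
}

ligand = 100e-9  # assumed free ligand concentration (M)

for name, kd in KD_VALUES.items():
    theta = ligand / (ligand + kd)
    print(f"{name:30s} Kd = {kd:.1e} M  occupancy ~ {theta:.3f}")
# The weak monovalent interaction occupies only a small fraction of receptors,
# whereas the multivalent and peptide ligands are essentially saturating at the
# same concentration -- one way to picture how multivalency converts a weak
# interaction into a strong one.
```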
The eGLP1-conjugated ASO showed no activity in other organs, including the liver, and exhibited pancreatic-cell-selective target gene knockdown. The eGLP1 pathway is considered to be a productive pathway. This pioneering study demonstrated delivery outside the liver using a ligand conjugation method. The potential systemic impact of the residual agonist activity of the ligand should be examined in future studies.
Glucose Transporter (GLUT1)
Miyata et al. focused on the glucose transporter GLUT1, which is highly expressed in the brain, for the delivery of therapeutic oligonucleotides to the brain [50]. The GLUT1 transporter is abundant in the endothelium of brain capillaries and migrates from the apical to the basal side in response to blood glucose. Based on this principle, they modified the surface of nanomicelles with glucose at various densities via the hydroxyl group at the 6-position. At an optimal density, the encapsulated PS-ASO could be delivered into brain parenchymal cells very efficiently and showed high knockdown activity. In this case, approximately 6% of the administered nanomicelles migrated into the brain. Notably, the dissociation constant of GLUT1 for glucose, approximately 3 mM, is very high, i.e., the interaction itself is weak. Again, by utilizing the multivalency effect, a weak interaction is converted into a strong one while ensuring specificity, and high levels of activity were successfully observed.
Specific Delivery Using Different Kinds of Cell Surface Receptors
Exosomes are extracellular vesicles with a diameter of approximately 100 nm that contain a variety of proteins and nucleic acid molecules and serve as carriers for transporting these biomolecules between cells. Various exosomal surface antigens are known, including tetraspanin proteins (CD63, CD9, and CD81), cell adhesion molecules (integrin and ICAM-1), and HLA-G, an MHC class I molecule. The intracellular uptake of exosomes released into the blood is cell-directed according to the expression patterns of these antigens. Because of these characteristics, they have garnered interest as nanocarriers that, like liposomes, can encapsulate drugs and deliver them organ-selectively. The most distinctive features of ExomiR-Tracker [51], a DDS technology we have developed, are as follows: (1) there is no need to isolate and purify exosomes, and (2) there is no need to encapsulate therapeutic nucleic acids inside the exosomes. Specifically, ExomiR-Tracker is a nucleic acid-antibody complex in which an anti-exosome antibody that binds exosome membrane surface proteins is conjugated to an anti-miRNA ASO. We administered this complex into the bloodstream, where it captured target exosomes circulating in the blood and effectively delivered therapeutic oligonucleotides to targeted cancer tissues. This study demonstrates that it is possible to achieve high target selectivity, and to provide the higher level of functionality necessary to achieve material symbiosis, by hijacking a superior intrinsic system such as exosomes, nestling in and hiding within it much like a clownfish.
Intracellular Kinetics of Nucleic Acid Drugs and Their Regulation
An interesting study of the intracellular dynamics of PS-ASO was reported by Mundigle et al. [52]. When PS-ASO was introduced directly into the cytoplasm of MCF-7 cells by microinjection, most of the PS-ASO accumulated in the nucleus within five minutes (no migration into the nucleolus was observed).
Since depletion of cellular ATP had no effect on this rate, it was concluded that nuclear migration is energy-independent and can be attributed to passive transport. The PS-ASOs enter the nucleus during their free movement through the cytoplasm and are retained there. In this Section, we examine the interactions between nucleic acids and substances in the cell that have a major impact on the efficacy, toxicity, and kinetics of therapeutic nucleic acids.
Interactions between Nucleic Acids and Organelles
Mitochondria: Stein et al. reported that PS-ASOs may interact with mitochondrial voltage-dependent anion channels (VDAC) from the outer membrane side (Ki ~ 0.2 to 0.5 µM) and inhibit mitochondrial respiration. This may trigger the release of cytochrome c and induce apoptosis (Figure 3a) [53,54].
Figure 3. (a) The interaction of PS-ASO with VDAC causes VDAC closure, which facilitates the release of cytochrome c through a mechanism that has not been clearly identified (modified based on references [53,54]). (b) Toxic mechanism of PS-ASOs mediated by interactions with paraspeckle proteins: toxic PS-ASOs tightly bind intracellular proteins, and these tight interactions can cause paraspeckle protein mislocalization to the nucleolus in an RNase H1-dependent manner and can affect pre-rRNA synthesis, causing nucleolar stress and inducing apoptosis (modified based on reference [55]).
Paraspeckle proteins: toxic PS-ASOs have also been reported to bind paraspeckle proteins tightly, causing their mislocalization to the nucleolus, nucleolar stress, and apoptosis (Figure 3b) [55,56]. Surprisingly, replacing a portion of the gap region, formed by the central DNA of the toxic PS-cEt ASO, with 2′-OMe RNA reduces its interaction with the P54nrb protein and prevents the toxic effects. Moreover, the effect of this 2′-OMe RNA introduction appears to be applicable to PS-cEt ASOs with various toxicities, which seems to be the main cause of tissue damage induced by these RNase H-dependent ASOs. Independent corroborative studies examining the generality of the effect of modifying this gap region would be interesting. In cells, many droplet-like organelles are composed of such RNA-protein complexes formed through phase separation, and it is thought that therapeutic nucleic acids can interfere with their functions.
We will continue to monitor the effects of other chemical classes of therapeutic nucleic acids on the function of these organelles.
Interactions with Proteins
The PS-ASO interacts with a variety of functional proteins in the cytoplasm and blood. In particular, Crooke et al. at Ionis have conducted quantitative and detailed analyses of the interaction of PS-ASO with proteins for many years (Table 1) [31]. From HeLa cell lysates, 58 proteins that bind to PS-ASO were identified by MS/MS analysis [57,58]. The proteins identified in this study were mostly known nucleic acid-binding and chaperone proteins. Crooke et al. performed thorough knockdown experiments to identify those that affect antisense activity and kinetics. Among these, HSP90 was suspected to be involved in this mechanism, as the antisense effect was attenuated by its knockdown. Detailed analysis revealed that HSP90 binds to PS-ASOs bearing highly hydrophobic artificial nucleic acids on the 5′ side and enhances their activity. Other factors, namely Ku70, Ku80, P54nrb, and hnRNPs, were found to compete with RNase H1 and inhibit antisense activity. Furthermore, they developed a unique interaction analysis system based on the nanoBRET system to effectively evaluate the stoichiometry and dissociation constant of the binding between PS-ASO and target proteins [59]. The nanoBRET system uses the interaction between a luciferase fusion protein called NanoLuc and fluorescently labeled PS-ASO. This analysis revealed that the major PS-ASO-binding proteins described above have binding strengths of approximately 10^-9 M. In addition to the large differences in dissociation constants observed across different sugar moiety modifications, significant quantitative information on binding was obtained, including the finding that Kd can vary by approximately 1000-fold for the same PS-cEt chemistry depending on the sequence. Crooke et al. also actively investigated the impact of these factors on toxicity, focusing on whether colocalization was observed in the cells [55,60]. They also examined the impact of knockdown. For example, they believed that DDX21, P54nrb, and PSF [55] may contribute to toxicity. The Ionis group was the first to show that many of these intracellularly bound proteins show strong interactions at approximately 10^-9 M [59]. It is not surprising that some of these proteins act as switches that turn on signals for toxicity. According to Crooke et al., the drawback of this proteomic analysis system is that only proteins that are relatively abundant and tightly bound can be observed [59]. In addition to constructing a system that can evaluate the binding of even small amounts of protein, further studies are needed to develop a method for the analysis of proteins that play important roles through weak interactions with therapeutic oligonucleotides.
Interactions with Intracellular Nucleic Acids
The ASOs and siRNAs act on RNA. The stronger the binding of the drug with its receptor, the more potent the expected pharmacological effect. From this perspective, the discovery of bridged nucleic acids has made it possible to remarkably increase the binding affinity for target RNAs by incorporating BNAs into PS-ASOs, thereby achieving clear enhancement of in vivo drug efficacy [61]. Many BNA analogs have been developed and are widely used in medicine.
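Returning briefly to the nanoBRET measurements described above: they boil down to fitting a saturation binding curve to a signal that increases with the bound fraction. The sketch below shows that idea on synthetic data; the "true" Kd, the signal amplitude, the noise level, and the probe concentrations are all invented for illustration and are not values from the Crooke et al. studies.

```python
# Sketch of estimating a dissociation constant from a saturation binding curve,
# as one would for BRET-type donor/acceptor data. All numbers are synthetic
# placeholders, not values from the studies cited in the text.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc_nM, bmax, kd_nM):
    """One-site binding isotherm: signal = Bmax * [L] / ([L] + Kd)."""
    return bmax * conc_nM / (conc_nM + kd_nM)

rng = np.random.default_rng(0)
true_bmax, true_kd = 1.0, 2.0                    # assumed amplitude (a.u.) and Kd (nM)
conc = np.logspace(-2, 3, 12)                    # assumed probe concentrations (nM)
signal = one_site(conc, true_bmax, true_kd) + rng.normal(scale=0.02, size=conc.size)

(bmax_fit, kd_fit), _ = curve_fit(one_site, conc, signal, p0=(0.5, 10.0))
print(f"fitted Kd ~ {kd_fit:.2f} nM (true value {true_kd:.2f} nM)")
```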
However, in scattered cases, strong tissue damage was observed with PS-ASOs, especially with BNA analogs, leading to the hypothesis that the toxicity may be caused by hybridization-dependent off-target knockdown, in which the ASO binds to non-target RNA and mediates suppression of its expression. This hypothesis is based on the observations that the ability to bind target RNA correlates with the frequency of toxic effects [62], that toxicity decreases when RNase H1 is deleted (so that unintended RNA cleavage no longer occurs) [63], and that a higher number of complementary target sequences correlates with higher toxicity [64]. This safety issue hinders practical application. Whether toxicity is hybridization-dependent or independent is hotly debated, and both theories are persuasive. Further experimental support or breakthrough technologies need to be developed in the future.
Conclusions
Our goal is to generalize and describe the physicochemical properties required of therapeutic oligonucleotides in order to realize "material symbiosis" between living organisms and oligonucleotide drugs. From this perspective, this review has provided an overview of the various interactions of oligonucleotides in vivo. We have provided a glimpse of the improvements made in the efficacy, pharmacokinetics, and safety of therapeutic oligonucleotides through efforts to quantitatively capture the molecular interactions of oligonucleotide drugs without compromising their therapeutic effects. In particular, for material symbiosis with therapeutic nucleic acids, it seems necessary to pay attention to the "quality" of binding with non-target molecules, in addition to the concept of binding specificity mentioned at the beginning of this article.
Author Contributions: Conceptualization, A.Y. and T.Y.; investigation, C.T. and T.Y.; data curation, T.Y. and C.T.; resources, T.Y., C.T. and S.K.; original draft preparation, C.T. and S.K.; supervision, T.Y. and A.Y.; writing-review and editing, T.Y. and A.Y. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest: The authors have no financial conflicts of interest to disclose concerning this work.
6,780.2
2022-11-29T00:00:00.000
[ "Biology" ]
Optimization of Surface Roughness of AISI P20 on Electrical Discharge Machining Sinking Process using Taguchi Method
Received: 28-10-2020 Revised: 11-02-2021 Accepted: 28-02-2021 Online: 15-04-2021
This research aims to obtain the optimal value of surface roughness of AISI P20 material in the Electrical Discharge Machining (EDM) sinking process. In the present research, the Taguchi method is used to investigate the significant influence of process variables on the machining performance and to determine the best combination of process variables for the EDM process. An L18 (2^1 × 3^3) orthogonal array based on the Taguchi method is chosen for the design of experiment. The experiment is replicated twice to find out the influence of four process variables, namely type of electrode, gap voltage, on-time, and off-time, on the response performance. Machining performance is evaluated by surface roughness as a response variable with smaller-the-better quality characteristics. These experimental data were analyzed using the signal-to-noise ratio and Analysis of Variance. The analysis results show that the surface roughness is influenced by the type of electrode and on-time. The combination of process variables to obtain optimal surface roughness is a graphite electrode with a gap voltage of 40 V, an on-time of 250 μs, and an off-time of 20 μs. This combination of process variables can be applied to the manufacturing process using EDM sinking in order to produce a good quality product, as determined by the surface roughness value.
A. INTRODUCTION
EDM can effectively machine conductive materials with a high level of hardness, which are difficult to machine with conventional machining processes, and is able to produce products with good surface quality and high precision (Bose & Mahapatra, 2014). EDM sinking is a type of EDM used in the manufacturing industry to produce moulds. In the EDM sinking process, the electrode moves towards the workpiece in the direction of the z-axis (Jahan, 2019). In the manufacturing industry, determining the combination of machining parameters is an important factor. The right machining parameters give optimal machining performance and can improve product quality. Meanwhile, in the EDM process, there are many parameters that significantly influence the results. Therefore, understanding the influence of various factors on the EDM process is very important; analysis by statistical methods can be used to select the best combination of process parameters to obtain optimal machining performance. The Taguchi method is an experimental optimization method in the field of engineering. The Taguchi approach has been applied in several industries and has improved quality. The fundamental idea of the Taguchi design method is to identify the parameter settings which make the quality of the product or process robust to variations in noise factors (Chen et al., 2013). The Taguchi method can ensure quality from the start of the design phase. An advantage of implementing the Taguchi method is that it can identify influencing factors in a shorter period of time, thereby reducing processing costs (Asiltürk & Akkuş, 2011). In the EDM sinking process, material erosion occurs on the workpiece. As the frequency of sparks increases, deeper craters are produced, which increases the surface roughness (Jahan, 2019). Surface roughness plays a very important role in any manufacturing process as an indicator of the quality of the surface produced, with implications for time and cost.
The better the surface quality, the lower the surface roughness value. This is the rationale for conducting research to improve quality, for example by applying the Taguchi design of experiment method to optimize the factors in the machining process. Several experimental studies have been conducted to investigate the influence of different parameters on surface roughness for various materials used in the manufacturing industry. Choudhary et al. (2013) investigated the influence of EDM input parameters such as electrode material, current, and pulse-on time on the surface roughness and material removal rate of 316 stainless steel. The Taguchi method with an L9 orthogonal array and ANOVA was applied to analyze the experimental results. MRR increases with higher current values. The type of electrode is the most influential factor on MRR, followed by current and, lastly, pulse-on time. Meanwhile, the factor that most influences the surface roughness response is the type of electrode, followed by the current and pulse-on time. Better SR results can be obtained with lower current values. Other research was conducted by Babu & Soni (2016) on the influence and optimization of process parameters for EDM die sinking of M300 steel using the Taguchi method with an L9 orthogonal array. The research found that voltage, current, and pulse-on time play an important role in the EDM process. The contribution and influence of each parameter on surface roughness were determined using Analysis of Variance (ANOVA). The most effective parameter for surface roughness is current, followed by voltage and pulse-on time. The optimum surface roughness can be achieved with the combination of parameters A1-B1-C2, namely a voltage of 80 V (level 1), a current of 0.5 A (level 1), and a pulse-on time of 1.6 μs (level 2). Chandramouli & Eswaraiah (2017) focused on using the Taguchi method to optimize the machining process parameters of 17-4 PH steel with copper-tungsten electrodes. The results of this study indicate that the selection of appropriate input parameters such as discharge current, pulse-on time, pulse-off time, and lift time plays an important role in the Electrical Discharge Machining process. The pulse-on time and discharge current parameters show a significant influence on Surface Roughness (SR), while the pulse-off time parameter has no significant influence compared to the other parameters. ANOVA results showed that pulse-on time had the highest percentage contribution to SR, at 76.7%. The optimal combination of input parameters and their levels to minimize surface roughness is A1-B3-C1-D1. Confirmation experiments were carried out in order to validate the optimal machining parameters. SR decreased from 9.78 µm to 2.89 µm, an improvement of 70.4%. Therefore, significant improvements in Surface Roughness (SR) can be achieved with the Taguchi method approach. Bahgat et al. (2019) conducted research applying the Taguchi method to produce a high material removal rate (MRR) with low surface roughness (SR) and a low electrode wear ratio (EWR) in H13 die steel. An experimental design with an L9 orthogonal array was applied with parameters such as peak current (Ip), pulse-on time (Ton), and type of electrode. The electrode materials were graphite, copper, and brass, which are commonly used in EDM machining. Based on the ANOVA results, the peak current (Ip) using copper electrodes is the most influential factor on EWR and MRR.
Surface roughness is significantly affected by the pulse-on time when using brass electrodes. The parameters for maximum MRR are copper with a peak current of 14 A and a pulse-on time of 150 μs. The optimum parameters for EWR are copper with a peak current of 2 A and a pulse-on time of 150 μs. Meanwhile, better surface roughness is produced using brass with a peak current of 2 A and a pulse-on time of 50 μs. Nagaraju et al. (2020) conducted experiments with various Electrical Discharge Machining (EDM) process parameters, such as discharge current, voltage, inter-electrode gap, and pulse-on time, to examine the surface roughness response of 17-7 steel. The experimental design used an L9 orthogonal array. After conducting the experiments, the responses were calculated and the results were analyzed in Minitab software by applying the Taguchi technique. The optimal combination of parameters to obtain low surface roughness is a current of 8 A, a voltage of 40 V, an inter-electrode gap of 150 µm, and a pulse-on time of 600 µs. After performing the Taguchi experiment, the surface roughness decreased from 1.85 µm to 1.57 µm. The results of these studies also indicate that the application of the Taguchi method improves the quality of the workpiece surface. The literature study revealed that each researcher used a different combination of process parameters; they applied a design of experiment and then analyzed the experimental results statistically to produce optimal machining performance. Besides that, in the EDM sinking machining process, the characteristics of each electrode material and workpiece are different, because each material has a different composition. Therefore, it is necessary to know the suitable material for each machining process, with the parameters set to produce the expected output, such as the best surface quality. The present research focuses on determining optimal parameters for the EDM process on AISI P20 in order to produce low roughness values. The experimental design used the Taguchi method, and the responses were analyzed using the S/N ratio and ANOVA to evaluate the influence of each parameter on the machining performance.
B. METHODS
1. Taguchi Method
The Taguchi method is one of the methods applied for the purpose of improving product quality, by finding the factors that influence quality and then separating them into control factors and uncontrollable (noise) factors. This method uses an orthogonal array to arrange the factors that affect the process and their levels; the factors and levels are then varied. Only the necessary data are collected to determine which factor most influences the results with a minimum number of experiments, thus saving time and resources (Razak et al., 2016). According to R. Choudhary & Singh (2018), the analysis of the Taguchi method is expected to identify the influencing variables, analyze the responses at optimal conditions, and minimize the number of experiments needed to obtain the desired results. Taguchi used the signal-to-noise (S/N) ratio to express quality characteristics. The S/N ratio is a quality indicator used to evaluate the effect of certain parameters on process performance (Asiltürk & Akkuş, 2011). In this method, the word 'signal' represents the expected (mean) value of the response characteristic, and the word 'noise' represents the undesirable value, that is, the standard deviation (SD) of the response characteristic.
Therefore, the S/N ratio is the ratio of the mean to the standard deviation (Chandramouli & Eswaraiah, 2017). Based on the response characteristics, there are three types of S/N ratios: nominal-the-best, smaller-the-better, and larger-the-better. The surface roughness response has smaller-the-better quality characteristics, calculated according to equation (1): S/N = -10 log10[(1/n) Σ yi^2] (1), where n is the number of replications and yi is the experimental data (Ikram et al., 2013). Furthermore, ANOVA is used to statistically analyze the influence of the process parameters on the resulting response. ANOVA consists of the degrees of freedom, sum of squares, mean square, and F-ratio. The F-ratio determines whether a process parameter is significant or not at a certain level of confidence. According to Vikas et al. (2014), a higher F-ratio value indicates that a process parameter has a significant effect on the response studied.
Experimental Work
The material used as the workpiece in the experiment is AISI P20 with dimensions of 25 mm × 25 mm and a thickness of 20 mm. The electrodes are made from copper and graphite. The machining process is carried out to a depth of 0.5 mm. An Ariztech ZNC LS 550 EDM sinking machine is used in the experiment, as shown in Figure 1, while the measurement of surface roughness was carried out using a Mitutoyo Surftest SJ-310.
Design of Experiment
In this research, the process variables used were the type of electrode, gap voltage, on-time, and off-time, while the response variable was surface roughness. This research uses two levels for the type of electrode and three levels each for gap voltage, on-time, and off-time. The value at each level of the process variables was determined with reference to the capabilities and specifications of the EDM machine; in addition, it considers the parameters suggested by the machine and data from previous research results. Table 1 shows the values of each process variable and its levels. The orthogonal array is determined based on the number of parameters, degrees of freedom, and levels. The total degrees of freedom determine the appropriate orthogonal array (OA) selection for conducting the experiment (Ariffin et al., 2014). The following must be fulfilled: the number of degrees of freedom of the OA must be greater than or equal to the number of degrees of freedom of the factor levels under study, to ensure that the selected OA design provides adequate degrees of freedom for the experiment being carried out. The degrees of freedom of a factor are calculated as in equation (2), DOF_factor = (number of levels - 1) (2), while for the orthogonal array, equation (3) is used, DOF_OA = (number of experimental runs - 1) (3). The orthogonal array suitable for this experiment is L18 (2^1 × 3^3), which is shown in Table 2. The total degrees of freedom of the factors is 7, while the degrees of freedom of the L18 orthogonal array is 17, which is greater than the degrees of freedom at the factor level; therefore, the experimental design may be declared eligible. The orthogonal array has one parameter with two levels and three parameters with three levels. A total of 18 combinations were carried out, each replicated twice, and the run order was randomized in order to equalize the influence of uncontrollable (noise) factors across treatments. The signal-to-noise (S/N) ratio and Analysis of Variance (ANOVA) calculations for the Taguchi-based optimization were performed with Minitab 17 software. This software is often used in the fields of mathematics, statistics, economics, sports, and engineering for statistical analysis and quality improvement (Günay & Yücel, 2013).
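As a concrete illustration of equation (1) and the degrees-of-freedom check in equations (2) and (3), the sketch below computes the smaller-the-better S/N ratio for one experimental run and verifies that the L18 array has enough degrees of freedom for the chosen factors. The two roughness readings are made-up placeholder values, not measurements from Table 3.

```python
# Smaller-the-better S/N ratio (equation (1)) and the orthogonal-array
# degrees-of-freedom check (equations (2) and (3)). The roughness values below
# are placeholders, not the measured data from Table 3.
import math

def sn_smaller_the_better(values):
    """S/N = -10 * log10( (1/n) * sum(y_i^2) )."""
    n = len(values)
    return -10.0 * math.log10(sum(y * y for y in values) / n)

replications = [4.85, 4.92]            # hypothetical Ra readings (um) for one run
print(f"S/N ratio ~ {sn_smaller_the_better(replications):.4f} dB")

# Degrees of freedom: one 2-level factor and three 3-level factors.
factor_levels = [2, 3, 3, 3]
dof_factors = sum(levels - 1 for levels in factor_levels)   # equation (2): 1 + 2 + 2 + 2 = 7
dof_L18 = 18 - 1                                            # equation (3): runs - 1 = 17
print(f"factor DOF = {dof_factors}, L18 DOF = {dof_L18}, design feasible: {dof_L18 >= dof_factors}")
```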
C. RESULT AND DISCUSSION
1. Data Collection
Measurement of surface roughness was carried out using the Mitutoyo SJ-310 surface roughness tester with a sampling length of 0.8 mm. Roughness was measured in the horizontal, vertical, and diagonal directions (45° and -45° angles) on the surface of the machined workpiece. Table 3 shows the surface roughness values for each combination in the first and second replication experiments. The roughness value for each replication is the average of the four-direction measurements, expressed as surface roughness in units of μm.
2. Data Analysis
The quality characteristic of the surface roughness response is smaller-the-better, because a minimum surface roughness value is desired. The smaller-the-better S/N ratio is calculated according to equation (1). The results of the S/N ratio calculation can be seen in Table 3. Based on the data in the table, the minimum S/N ratio for surface roughness is in combination 1, at -13.7194, and the maximum is in combination 15, at -5.6305. In the S/N ratio analysis, a larger S/N ratio corresponds more closely to the ideal performance, regardless of the category of performance characteristic. This means that the largest value indicates the best combination for achieving optimal results (Mohanty et al., 2019). Furthermore, the analysis of variance (ANOVA) was calculated using Minitab 17. The degrees of freedom (DF), sum of squares (SS), mean square (MS), F-value, and P-value are shown in Table 4. Based on the ANOVA results, hypothesis testing was carried out on each factor to determine its significance. Hypothesis testing uses the F-test, comparing the F-value with the F-table value. The F-value is defined as the ratio of the variance caused by each factor to the error variance. The F-table value is taken at a confidence level of 95%, with α = 0.05. The hypotheses are as follows: H0: the factor has no influence on the response; H1: the factor influences the response. If the F-value is greater than the F-table value, the null hypothesis (H0) is rejected, and it can be concluded that the factor influences the response. Otherwise, if the F-value is smaller than the F-table value, the null hypothesis (H0) is accepted, and it can be concluded that the factor has no influence on the response. Based on Table 5 and the comparison of the F-values with the F-table values, the parameters of type of electrode and on-time have a significant influence on surface roughness. On the other hand, off-time and gap voltage do not have a significant influence on surface roughness. After calculating the S/N ratio and ANOVA, the analysis continues by determining the factors and levels that produce optimal surface roughness. This calculation refers to the S/N ratio data in Table 3. The optimal conditions are determined by looking at the largest mean S/N ratio for each factor level. The results of calculating the mean value for each parameter can be seen in Table 5. Table 5 shows that the maximum mean occurs at level 2 of the type of electrode parameter, level 2 of the gap voltage parameter, level 3 of the on-time parameter, and level 1 of the off-time parameter. Delta is the difference between the largest and smallest mean values, and the rank is based on the delta, ordered from the largest delta value to the smallest. Rank 1 is the most influential factor, namely the type of electrode, followed by rank 2 (on-time), rank 3 (off-time), and rank 4 (gap voltage). A graph of the optimum condition parameters can be seen in Figure 2.
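To illustrate how the level means, deltas, and the F-test cutoff in Tables 4 and 5 are obtained, the sketch below computes main-effect means of the S/N ratio per factor level and the critical F-value at the 95% confidence level. The small response table is fabricated for demonstration; only the procedure mirrors the analysis described in the text, and the degrees of freedom passed to the F distribution are placeholders for whatever the ANOVA table reports.

```python
# Main-effect means of S/N ratios per factor level (as in Table 5) and the
# critical F-value for a 95% confidence level. The data below are fabricated
# for illustration; the real analysis uses the 18-run L18 results.
from collections import defaultdict
from scipy.stats import f as f_dist

# (electrode, gap_voltage, on_time, off_time, S/N ratio) -- hypothetical rows
runs = [
    ("copper",   40, 150, 20, -13.7),
    ("copper",   50, 200, 30, -11.2),
    ("graphite", 40, 250, 20,  -5.6),
    ("graphite", 60, 200, 40,  -8.9),
]

means = defaultdict(list)
for electrode, *_rest, sn in runs:
    means[electrode].append(sn)
for level, values in means.items():
    print(f"mean S/N for electrode={level}: {sum(values)/len(values):.2f}")

# Critical F at alpha = 0.05 for a factor with 2 dof against 10 error dof
# (placeholder dof values).
f_crit = f_dist.ppf(0.95, dfn=2, dfd=10)
print(f"F-critical(0.05; 2, 10) = {f_crit:.2f}  ->  factor significant if its F-value exceeds this")
```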
Surface roughness (SR) is defined as the irregularity of the surface contours of the workpiece. The surface contours resulting from the machining process take the form of craters formed on the surface by the discharge current. Increased surface roughness means that larger and deeper craters are formed on the surface of the workpiece. Conversely, the formation of small craters on the surface of the workpiece produces a better surface (Gopalakannan & Senthilvelan, 2012). The higher the on-time value, the longer the current flows to the workpiece, and therefore the resulting craters will be wider and deeper (Jahan, 2015); as a result, surface roughness increases. According to the results obtained in this research, however, a higher on-time value of 250 μs can produce optimal surface roughness. One reason for this could be that the transfer of energy is hindered by the plasma formed in the gap between the electrode and the workpiece, so that small craters are formed (S. K. Dewangan, 2014). The characteristics of each electrode material and workpiece are different, because each material has a different composition; thus, the selection of electrode material is important in this process (Bahgat et al., 2019). In the present research, the suitable electrode material for producing better surface roughness is graphite. An off-time of 20 μs, the smallest value, and a gap voltage of 40 V give the optimal surface roughness. There is no spark during the off-time because the current flowing between the electrode and the workpiece is cut off (Jahan, 2015). During the off-time, the dielectric liquid flushes away the debris. The better the flushing condition during the EDM process, the less off-time is required for machining, resulting in higher efficiency of the entire EDM process (Jahan, 2015).
D. CONCLUSION AND SUGGESTIONS
In this research, a design of experiment using the Taguchi method, with analysis carried out using the S/N ratio and ANOVA, was applied to obtain the combination of process parameters for EDM sinking that produces the optimal surface roughness of AISI P20 material. The conclusion of this research is that the optimal surface roughness, with smaller-the-better quality characteristics, is obtained using a graphite electrode and setting a gap voltage of 40 V, an on-time of 250 μs, and an off-time of 20 μs. At a 95% confidence level (α = 0.05), the parameters of type of electrode and on-time have a significant influence on the surface roughness, while off-time and gap voltage do not significantly influence the surface roughness. In this research, several process variables were considered constant. For further research, observations can be made of the same response by adding other process variables.
4,578
2021-04-17T00:00:00.000
[ "Business", "Materials Science" ]
The Grounding of Identities
A popular stance amongst philosophers is one according to which, in Lewis' words, "identity is utterly simple and unproblematic". Building from Lewis' famous passage on the matter, we reconstruct, and then criticize, an argument to the conclusion that identities cannot be grounded. With the help of relatively uncontroversial assumptions concerning identity facts, we show that not all identities are equi-fundamental; on the contrary, some can be provided with potential grounding bases using two-level identity criteria. Further potential grounding bases for identities are presented. Identity might be utterly simple and unproblematic, but this is not sufficient to conclude that identities are ungrounded, or fundamental. "[i]dentity is utterly simple and unproblematic. Everything is identical to itself; nothing is ever identical to anything else except itself. There is never any problem about what makes something identical to itself; nothing can fail to be. And there is never any problem about what makes two things identical; two things never can be identical." 1 This passage is widely acknowledged and helped build a somewhat silent consensus concerning the identity relation: given a standard notion of identity, no analysis nor philosophically interesting story about identities can ever be given. One can, of course, provide an interesting story as to how two terms came to codesignate, but that is a distinct matter: there is something interesting to tell about how "Hesperus" and "Phosphorus" came to designate the same thing, but that Hesperus is Phosphorus is a basic fact which deserves no further investigation. This attitude is in fact so ingrained that it is seldom explicitly stated; we will call it the "Knee-Jerk Reaction". 2 In this paper, we will argue that Lewis' passage fails to establish a clear motivation in favor of the Knee-Jerk Reaction, and that in fact a wide variety of counterexamples involving identities can be offered. We will not assume anything peculiar about identity: identity is the relation that everything has with itself and nothing else, or the smallest equivalence relation, whose equivalence classes are singletons. Similarly, we will assume that identities are necessary. No funny business there. We will, however, make some more substantive assumptions cashing out the talk of "making it the case" in Lewis' passage: in what follows, we will employ the notions of metaphysical grounding and fundamentality to properly articulate the Knee-Jerk Reaction. 3 Other ways to make sense of it will wait for another occasion. In the next Section, we will introduce the basics of metaphysical grounding, and reconstruct Lewis' passage into an argument to the conclusion that identities, insofar as they are necessary, cannot be grounded. The crucial premise that necessities cannot be grounded will be criticized. In Section 3, a different argument against the grounding of identities is formulated by noticing not the necessity of identity, but its fundamentality. This argument will also be criticized, based on the crucial idea that identity facts are not all equi-fundamental; on the contrary, some appear to be very derivative, and may thus be offered grounding bases with the help of the so-called two-level identity criteria.
Finally, in Section 4, additional potential grounding bases for identities will be offered, eventually leading to the conclusion that if the Knee-Jerk Reaction against the analysis of identity is to be defended, metaphysical grounding and fundamentality are not the right tools for the job. 1 Lewis (1986: 192-193). 2 E.g., Lowe and Noonan (1988: 80-81), Williamson (1990: 144-145), Jubien (1996), Block and Stalnaker (1999: 24), Salmon (2005: 153), Kim (2008: 102), Horsten (2010), and Fine (2016). 3 Other ways to make sense of it will wait for another occasion. (At this juncture, it might be worth pointing out that we are not interested in the exegetical reconstruction of Lewis' thought process on the matter: on the contrary, we will happily help ourselves to resources that he may or may not have intended to deploy.)
No Grounding Necessities
Someone familiar with recent developments in metametaphysics may think that the presence of "what makes it the case" locutions in Lewis' passage betrays the presence of metaphysical grounding. We will, in a nutshell, take metaphysical grounding to be a non-causal determinative or explanatory connection, a strict partial order of relative fundamentality. 4 With that in mind, Lewis' passage may be reconstructed as an argument scheme (p being, in this instance, a placeholder for names of sentences): (P1) If p is a true identity sentence, it is necessary that p. (P2) If it is necessary that p, then nothing grounds [p]. (C) Therefore, if p is a true identity sentence, nothing grounds [p]. The argument scheme is deductively valid, and, for every true identity sentence p, yields the conclusion that nothing grounds [p]. So the argument quite straightforwardly states that identities, being necessary, cannot be grounded. As before, grounding may not be the only way to make sense of the Knee-Jerk Reaction, not even when the necessity of identity is at stake. For example, Kim (2008: 102) suggests that identities, to the extent in which they are necessary, are not explainable, as nothing "makes it the case" that an identity is the case. Thus, we could expand the (P1)-(C) argument with an additional premise, as follows: (P1) If p is a true identity sentence, it is necessary that p. (P2) If it is necessary that p, then nothing grounds [p]. (P3) If nothing grounds [p], then nothing explains why p. (C*) Therefore, if p is a true identity sentence, nothing explains why p. Under an Unionist conception of grounding and explanation, according to which grounding is an explanatory relation (e.g., Thompson, 2016; Maurin, 2019), the difference between the two arguments is negligible; but let us focus on the first one, and on the grounding of identities proper. 4 The true story is obviously much more complicated than that, as grounding is one of the most discussed notions in contemporary philosophy. For an introduction, see Correia & Schnieder, 2012; Clark & Liggins, 2012; Trogdon, 2013a; Bliss & Trogdon, 2014. Although, like us, there are those who take metaphysical grounding to be a heavy-duty relation (usually between facts, as in Rosen, 2010), many subscribe to a deflated notion, usually by taking grounding to be a sentential connective instead (e.g., Dasgupta, 2014, 2017). We do not hope, in this paper, to weigh the toll that metaphysical grounding takes on reality, and we provisionally take metaphysical grounding to relate facts. Thus, we take whatever it is that makes it the case that p to be the ground of [p], where [p] is the fact that p. We take grounding to be a dyadic relation for the time being (e.g., [p] is grounded by [q]). Standardly, metaphysical grounding is a strict partial order (Maurin, 2018), although the irreflexivity of grounding has been questioned in Jenkins (2011), and its transitivity (qua relation between facts) in Schaffer (2012).
Some further features of the notion (e.g., its modal status) will be discussed later on in this section. One last thing should be noted: in this paper we will restrict the discussion about the grounding of identities to the discussion about the metaphysical grounding of identities, although we do not presume that all grounding is metaphysical. We leave for another time an extension of our reconstruction of the broadly Lewisian argument against the grounding of identities which encompasses non-metaphysical grounding. The crucial premise is (P2), according to which all necessary facts are ungrounded. As Cameron (2008: 262) points out, "[that] a demand for grounding vanishes when the truth in question is necessary is a familiar thought"; that said, a claim as robust as this should not be accepted without motivation, its popularity notwithstanding. 5 Rather than exploring which conception of metaphysical grounding underpins (P2) in the argument scheme above, we can quickly find counterexamples to it deploying standard grounding notions. For one, it is standardly accepted (e.g., using the "impure logic of ground" in Fine, 2012) that disjunctions are grounded in their true disjuncts, which straightforwardly turns disjunctive tautologies into counterexamples to (P2): for any given sentence p, it is necessary that p or not-p, yet [p ∨ ¬p] is grounded in either [p] or [¬p] (depending on whether p is true or false). And similarly, as in Fine (2016: 6), if p is a necessary truth, [p] likely grounds [p ∧ q], for any q, although p's necessity makes [p∧q] necessary as well. Easy counterexamples aside, it is not difficult to see how (P2) might be at odds with how we usually conceive metaphysical grounding; for one, we conceive it as hyperintensional (in the sense that, for any true sentences p, q, and r, such that q and r are intensionally equivalent, it might be the case that [p] grounds [q] but not that [p] grounds [r]). One of the motivations for this feature is that any weaker conception would fail to detect grounding in the case of necessities, thus implicitly admitting that there are non-trivial differences amongst the grounds of necessities. 6 In conclusion, we shouldn't uphold the Knee-Jerk Reaction as the position according to which identities, being necessary, cannot be grounded, although the necessity of identity might motivate the Knee-Jerk Reaction in another way. 7 We will follow a thread more closely related to that of grounding, however, and wonder whether the problem of identity stems not from its necessity, but rather from its fundamentality. The thought would be that identity is itself a perfectly fundamental relation: if there is a realm of fundamental facts, identities definitely look like they belong there. Our retort to this inviting line of thought is that identity facts are not equi-fundamental, and while we do not deny that some identities are absolutely fundamental, we very much doubt whether all of them are. The reason hides in plain sight: given plausible assumptions, the fundamentality of an identity is not merely determined by the fundamentality of the identity relation, but by the fundamentality of the entities being identified. This is very clear with a structuralist conception of facts in mind, 8 but in whatever metaphysical box identities belong to, it is very much possible that their prospects of grounding depend not entirely on the features of the identity relation (its necessity, or fundamentality), but also on the metaphysical status of the entities being identified.
The kind of items picked out by the [...], be they events, facts, or what have you, can on many accounts be thought of as involving, for lack of a more specific term, not only properties and relations, but the objects that instantiate them; thus, the presence of a somewhat derivative item such as Istanbul in [Istanbul is beautiful] is bound to make [Istanbul is beautiful] itself somewhat derivative; and the presence of Istanbul, or Constantinople, in [Istanbul is identical to Constantinople] is bound to make [Istanbul is identical to Constantinople] somewhat derivative too. More generally, the identities between ontologically dependent, or metaphysically derivative, items may be open to grounding in a way that others are not. We may formulate this train of thought with the help of two claims. Firstly, that there are differences in fundamentality between entities of different kinds (maybe entities, like facts, can be ordered with the help of a notion of relative fundamentality): according to this view, philosophically suspicious items such as directions of lines and numbers of concepts (but also boy-bands and parliaments) should not be excised from existence altogether, but be allowed to live as second-class citizens of reality, effectively striking a middle ground between eliminativism and egalitarianism that many will find appealing. 9 Secondly, that the relation of numerical identity is absolute, in the sense that everything (unrestrictedly) is self-identical, be they electrons or boy-bands. Thus, the world may be layered, yet the same relation of numerical identity obtains across all layers: although the identity relation is the same for everything, the things which are said to be identical may enjoy very different metaphysical status; as a result, the ensuing identity facts might enjoy very different metaphysical status, with only some identities being fundamental, while others may be provided a grounding base. As hinted before, some identities, more than others, appear to be amenable to this kind of treatment: specifically, the identities in the so-called two-level identity criteria (Williamson, 1990: 145-146), in which the identities between entities of a certain kind are characterized through a condition imposed upon entities of another kind; as Lowe (1989: 4) explicitly noted, there seems to be some kind of ontological dependence relating the two classes of entities (e.g., directions of lines ontologically depend on lines, numbers of concepts ontologically depend on concepts, and so forth…). 8 An argument in this direction may be extrapolated with the help of Sider's (2011: 170-171) so-called "purity", according to which no fact can be more fundamental than any of its components; therefore, identity facts come in varying degrees of fundamentality, depending on the metaphysical status of their component objects. It may be worth noticing that Sider (2011) does not formulate fundamentality in terms of grounding, but in terms of joint-carving; and, for him, identity is joint-carving. 9 For one, Bennett (2011: 28) famously takes the egalitarian option to be "crazypants", and one regarding which "every fiber of my being cries out in protest". Whether the position is false is another matter entirely. Also see Schaffer (2012: 123).
Although we have no clear account of this relation of ontological dependence between entities, and of its relation with metaphysical grounding as discussed so far, this at least paves the way for thinking that some identities can be grounded; to deploy a standard metaphor in the grounding literature, after determining that lines a and b were parallel, God didn't need to add anything to the world to make it so that their directions are identical; and similarly, after deciding that the forks and knives on a table are in a bijection, God didn't need to add anything else to make it so that the number of forks is identical to the number of knives (for a discussion see Carrara & De Florio, 2018). Not all identity criteria can be reduced to the kind of two-level identity criteria considered above (as discussed in Lowe, 1989: 4, 1991), and as such, not all identities are provided such a potential grounding base.
Potential Grounds for Identities
It may at this point be worthwhile to notice that, once we make peace with the idea that identities might be grounded after all, a huge variety of potential grounding bases are immediately available: in this Section we present two, based respectively on indistinguishability and existence, with no pretense of exhaustiveness. The crucial difference between these potential grounding bases and the ones offered in the previous Section, through two-level identity criteria, is that these grounding bases are supposed to work for all identities, as opposed to only some. Firstly, indistinguishability. One may extract grounding principles through a metaphysically robust reading of equivalence principles involving identities (or, more weakly still, entailment principles). The more straightforward option is, of course, the problematic Identity of Indiscernibles, as in (II) (stated at the second order): (II) ∀F(Fa ↔ Fb) → a = b, which may be read as offering grounding principles of numerical identities between objects in terms of their qualitative identity. The first problem about (II) and the corresponding grounding principles is the well-known problem about the scope of the second-order quantifier: either the bound variable 'F' ranges over all properties, including identity-involving ones that would make (II) trivial, or some kind of restriction is imposed upon the quantifier, at the risk of making (II) false. Yet even if that problem is solved, and the right balance is struck when restricting the quantification over properties (so as to make (II) both true and interesting), the risk exists that a circularity is involved insofar as qualitative identity somehow involves numerical identity. 10 That qualitative identity presupposes numerical identity might be a problem if we wish to define numerical identity along the lines of (II) and its converse (the Indiscernibility of Identicals, as in McGinn, 2000: 7); yet assuming that such a definition is not forthcoming (identity is already given), we must now understand what its consequences are for the grounding of identities instead. Let us consider a toy world in which there are only three monadic properties P, Q, and R, and no relations; now let us consider objects a and b, such that a=b; if (II) can really offer grounding principles, then [a=b] is grounded in the fact that a and b share all the properties: in [∀F(Fa ↔ Fb)], or perhaps directly in [Pa & Pb], [Qa & Qb], and [Ra & Rb]. While [Pa & Pb], [Qa & Qb], and [Ra & Rb] can be described as stating that a and b possess the same properties P, Q, and R, they are not identity facts, nor do they have identity as parts.
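To keep the principles apart, here is a schematic display in standard second-order notation; the grounding claim (II-G) is our rendering of the "metaphysically robust reading" of (II) sketched above, not a formula appearing in the original passages.

```latex
% Identity of Indiscernibles, its converse, and the robust grounding reading
\begin{align*}
\text{(II)}   &\quad \forall F\,(Fa \leftrightarrow Fb) \rightarrow a = b
              && \text{Identity of Indiscernibles} \\
\text{(InId)} &\quad a = b \rightarrow \forall F\,(Fa \leftrightarrow Fb)
              && \text{Indiscernibility of Identicals} \\
\text{(II-G)} &\quad [\forall F\,(Fa \leftrightarrow Fb)] \text{ grounds } [a = b]
              && \text{robust reading of (II)}
\end{align*}
```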
What is more, an advantageous peculiarity of taking [a=b] to be grounded in [∀F(Fa ↔ Fb)] is that it neutralizes a well-known problem in the grounding of universal quantifications in their instances (as discussed in Fine, 2012: 60-62). The problem is the following: assume another toy world in which [∀xPx] is jointly grounded by [Pa] and [Pb], with a and b being the only things that exist, both of which are P; now assume that grounding is necessary in the sense that for all sentences p and q such that [p] is grounded by [q], ☐(q → p). Let us call this thesis grounding necessitarianism, or (N). 12 Given these assumptions, at all worlds in which it is the case that both a and b are P, it should also be the case that everything is P; yet this is patently false. It is easy to conceive of a world in which another entity c exists, which is not P. The idea is that [Pa] and [Pb] are not sufficient to ground the universal fact: one also has to add that a and b are all that exists (perhaps in the form of a totality or a negative fact). A similar worry might seem to arise for the grounding of [∀F(Fa ↔ Fb)]. In fact, consider a qualitatively expanded world with the additional alien property S (which does not actually exist), such that, in that world, a possesses S, but not b (or vice versa); in that world, however, a and b share P, Q and R as before. S does not exist in the original toy world, and thus does not fall in the scope of the quantifier in ∀F(Fa ↔ Fb). In our qualitatively expanded world, then, the facts [Pa & Pb], [Qa & Qb], and [Ra & Rb] still obtain while [∀F(Fa ↔ Fb)] fails, which would seem to clash with (N). Interestingly, such considerations cut no ice in our present case, given that we are already assuming that [a=b] is grounded by [∀F(Fa ↔ Fb)]; a and b being identical, and necessarily so, makes it impossible that in this qualitatively expanded world a has S, but not b (by the uncontroversial Indiscernibility of Identicals). Taking an identity [a=b] to be grounded by [∀F(Fa ↔ Fb)] makes it so that even if (N) is true, qualitatively expanded worlds do not provide counterexamples to the idea that [Pa & Pb], [Qa & Qb], 11 and [Ra & Rb] ground [∀F(Fa ↔ Fb)] - although in such worlds P, Q, and R are not all the properties that exist. 13

11 Two things to notice. Firstly, we introduced grounding as a binary relation, but perhaps it would be best to consider it as a variably polyadic relation in which one item is grounded by a plurality of items, where the one-to-one case of grounding constitutes a limiting case of it (in the sense that a plurality of one is a limiting case of plurality). Secondly, assuming grounding to be transitive, it makes little difference whether [a=b] is taken to be grounded in the plurality [Pa & Pb], [Qa & Qb], and [Ra & Rb] or in the single conjunction [Pa & Pb & Qa & Qb & Ra & Rb], if in turn we assume that conjunctions are grounded by their conjuncts.
12 Although it is hard to say whether grounding necessitarianism is part of the orthodoxy about metaphysical grounding, it is often assumed. See Trogdon (2013b) and Skiles (2015) for critical discussion.

Secondly, existence. We might consider the possibility that identities may be grounded by existence facts (e.g., Salmon, 2005: 153; Burgess, 2012: 90). One of the reasons behind the Knee-Jerk Reaction is that identity is, by standard definition, a very undemanding thing to occur: not much is required from Hesperus and Phosphorus for them to be identical. However, something is in fact required of them: it is required that they exist. Thus, or so the idea goes, existence facts may be a suitable grounding base for identities.
To employ the same metaphor as above, once God had added things to the world (and thus, their existence), She didn't need to add identities between them: identity comes for free once objects are put into existence, and thus appears particularly suitable to be grounded upon existence facts. Alternatively, the identity between Hesperus and Phosphorus may not be grounded by the fact that Hesperus exists (or, equivalently, the fact that Phosphorus exists), but rather by Hesperus (Phosphorus) itself -viz. the planet Venus-, a suggestion that would force us to reconsider grounding as a trans-categorical relation, which can also relate objects, as in Schaffer (2009). A similar consideration can be put forward by those who think that facts need to be composed by properties and relations, yet deny existence the status of a (real) property. Another reason to eschew existence talk entirely, as noted in Shumener (2017: 5), is that the most common definition of the existence predicate crucially deploys identity. Here is how the problem may be formulated: assuming the fact [a=a] to be grounded in the fact that a exists, the fact that a exists may take the form [∃x(x=a)]; however, existential quantifications are standardly taken to be grounded in their true instances; thus, [∃x(x=a)] is grounded in [a=a]. Thus, if we start with the assumption that [a=a] needs to be grounded in the fact that a exists, the transitivity of grounding dictates that [a=a] is grounded upon itself. Yet if grounding is irreflexive, that is a problematic conclusion. 14,15

13 This case highlights the difference between grounding necessitarianism as in (N), and the stronger thesis that if a (collection of) facts grounds a fact, it necessarily does.
14 Fine (2012: 59-60) offers additional considerations as to why the existence of a (in the sense of [∃x(x=a)], or in a more generic sense) should not be grounded in a's self-identity fact [a=a].
15 One may argue that, identity being a fundamental relation, the self-grounding of identity facts such as [a=a] may in fact constitute a palatable option; this would of course expand the landscape of potential grounds for identity in a way that violates the irreflexivity of metaphysical grounding. We would like to thank an anonymous reviewer for suggesting this possibility to us.

Concluding Remarks

Identity might be utterly simple and unproblematic as Lewis claimed, but given certain assumptions on grounding as a relation of relative fundamentality between facts, it is not so obvious that identity facts are either ungrounded or fundamental. While some potential grounding bases have been offered with the help of indiscernibility and existence facts, we have also highlighted the crucial fact that not all identities are in the same boat in this respect: depending on the metaphysical status of the items being identified, some identities are bound to be more fundamental than others, and more derivative identities may have grounds through two-level identity criteria. These results fit more general considerations regarding the status of identities. A popular knee-jerk reaction amongst philosophers regarding identity is that once it is stated that, say, Hesperus and Phosphorus are identical (or equivalently, that Hesperus and Phosphorus are one), there's nothing left to say on the matter of philosophical interest.
That said, even accepting that there's hardly any conceptual analysis or definition of identity forthcoming, it remains to be seen whether there is no explanatory or otherwise metaphysical value to entailment principles concerning identities, such as indiscernibility principles or identity criteria.

Funding

Open access funding provided by Università degli Studi di Padova within the CRUI-CARE Agreement.
5,576.2
2021-03-01T00:00:00.000
[ "Philosophy" ]
Public Engagement in a Global Pandemic

UKRI/STFC's Scientific Computing Department (SCD) has a long and rich history of delivering face-to-face public engagement and outreach, both on site and in public places, as part of the wider STFC programme. Due to the global COVID-19 pandemic, SCD was forced to abandon an extensive planned programme of public engagement, alongside altering the day-to-day working methods of the majority of its staff. SCD had to respond rapidly to create a new, remote-only, programme for the summer and for the foreseeable future. This was initially an exercise in improvisation, identifying existing activities that could be delivered remotely with minimal changes. As the pandemic went on, SCD also created new resources specifically for a remote audience and adapted existing activities where appropriate, using our evaluation framework to ensure these activities continued to meet the aims of the in-person programme. This paper presents the process through which this was achieved, some of the benefits and challenges of remote engagement and the plans for 2021 and beyond.

Introduction

The Science and Technology Facilities Council (STFC), part of UK Research and Innovation (UKRI), is responsible for supporting, co-ordinating and promoting research, innovation and skills development in areas ranging from the largest astronomical scales to the tiniest constituents of matter [1]. STFC's Public Engagement vision is of a society that values and participates in scientific endeavour [2] and, to help achieve this, Public Engagement is one of STFC's six strategic themes [3]. STFC also has a dedicated public engagement team with managers at each of the three largest sites and its headquarters. "Big Data and Computing" is one of their five key outreach themes [4]. STFC's Scientific Computing Department (SCD) is distributed across two STFC sites and manages high performance computing facilities, services and infrastructure, supporting some of the UK's most advanced scientific facilities. SCD works across a number of large facilities at both the Daresbury Laboratory in Cheshire and the Rutherford Appleton Laboratory in Oxfordshire (such as the UK Worldwide LHC Computing Grid (WLCG) Tier 1 [5] and the JASMIN Super Data Cluster [6]), across a large number of scientific software packages (such as the materials modelling code CASTEP [7] and The HSL Mathematical Software Library [8]) and has close links with other computing centres around the UK, particularly including the Hartree Centre [9,10]. Due to the global COVID-19 pandemic, SCD has pivoted its public engagement programme to be entirely remote for the foreseeable future. This paper presents the process through which this was achieved, the plans for 2021 as well as some benefits and challenges of remote engagement.

Public Engagement pre-COVID-19

SCD has a long and rich history of delivering face-to-face public engagement and outreach, both on site and in public places, as part of the wider STFC programme. One of the unique qualities of STFC's public engagement programme is providing schoolchildren and members of the public a chance to experience "big science" for themselves, by participating in a wider visit to one of STFC's high-tech laboratories and facilities [11]. These visits often involve a large number of visitors being hosted by STFC for a day of facility tours, hands-on workshops and talks delivered by our staff, see Fig. 1 and Fig. 2.
While this presents a unique and exciting experience for participants, it can limit the reach of STFC's Public Engagement efforts to those who have the means and ability to travel to one of the sites. STFC recognises this and has long had a strategic aim to improve the reach of its public engagement [2] by seeking to do more with audiences geographically remote from an STFC campus or other substantial STEM organisation, as well as increasing the proportion of our activities that reach low science capital [12] audiences. In practice, it proved difficult to devote the necessary time and effort needed to get a significant remote programme off the ground whilst at the same time continuing with the ongoing programme of Public Engagement and its continuous improvement. There was also a reluctance towards, and a lack of technical familiarity with, remote engagement from STFC's key audience before the pandemic. Despite this, STFC as a whole had recent successes in extending its geographical reach and SCD had started to think about engaging with remote schools before the pandemic, through a collaboration with another STFC site, the Boulby Underground Laboratory. Due to Boulby's location in a working mine, in-person visitors to the laboratory were of necessity rare (and children are not permitted underground for safety reasons), and so Boulby had developed techniques and expertise in remote engagement.

Enabling access; Travel bursaries and the Wonder Initiative

STFC's Wonder initiative [13] is a long-term commitment by STFC to audience-driven public engagement with communities in the most socioeconomically-deprived areas of the UK. Working with communities to understand and overcome barriers to engagement is a core part of STFC's PE mission, and many projects (both before and during the pandemic) work with community groups and other organisations to achieve this end. There has been a shift in some of these barriers during the national lockdowns - for example, before the pandemic, a key barrier was the cost of travelling to one of our laboratory sites. To begin to lower this barrier, STFC has used bursaries to cover the transport costs of schools located in areas of higher socioeconomic deprivation, or with a high proportion of students with free school meals. Understanding the new barriers that exist in a world of remote engagement is an important - and ongoing - part of our public engagement programme.

Working with others, Boulby Underground Laboratory and the Remote 3 Project

The Remote 3 or "Remote sensing by Remote schools in Remote environments" project is a collaborative effort between the University of Edinburgh, STFC's Boulby Underground Laboratory and STFC's Scientific Computing Department which was initiated in 2019 and funded by an STFC Spark Award, with the aim of delivering much-needed STEM engagement to some of the most remote areas of Scotland. As the project title suggests, it was created as a remote project. Students from these areas are underserved in provision of STEM engagement compared to their urban counterparts, which is believed to be a contributing factor in the low rate of progression to Higher and undergraduate education. The project is aimed at S1-S3 students in 10 Scottish high schools (student ages 11-14). Teams of 4-6 students are assembled per school and over a 14-week period they design, build & program a miniature Mars Rover, sharing their progress with their peers, teachers and families.
This is then sent to the Boulby Underground Laboratory to explore the STFC Mars Yard, 1.1 km deep underground. The project was constructed with specific skills and attributes in mind, seeking to inspire innovation and creative design, develop digital skills, encourage team-work, team-building and oral and written presentation skills in a diverse environment, and provide awareness of the remarkable on-going front-line scientific activity taking place in the UK and overseas. To demonstrate the feasibility of the project, a trial project ran during RAL summer coding week in 2019, see Fig. 2 and Fig. 3. Twenty pupils aged 8-14 were split into teams of 4 and at the end of the week they saw their Mars rovers remotely via Zoom at the STFC Mars Yard at Boulby taking data and exploring the terrain. They then presented their findings to their parents and peers as well as STFC senior management. The project began in early 2020, with a launch event with schools in early March - student teams were due to begin their projects as the first national lockdown began.

COVID-19 in the UK

By the middle of March 2020, the vast majority of SCD staff started to work remotely from home, following the UK government's guidance to work from home if possible [14]. Access to STFC sites was limited to those working on and supporting COVID research. Less than a week later schools were closed with immediate effect [15], switching to remote learning where possible. This was to be the case for the foreseeable future, and as a result STFC's planned summer programme of in-person Public Engagement, including a large-scale public open week at Daresbury, was cancelled. Since then, the UK public has faced a series of local and national restrictions on unnecessary social contact. At times of national lockdown, the strictest measures enforced to date in the UK, the public was told they must not leave the house except for a limited number of "reasonable" justifications [16].

Remote Public Engagement

With the summer programme cancelled, and not knowing when in-person Public Engagement events would be feasible again, the SCD PE committee sought to redirect its efforts to devising remote activities and improvising a new Public Engagement programme. Remote PE brings its own challenges, particularly in areas such as safeguarding and evaluation, as well as a simple lack of equipment at home and reaching those who do not have good data connections, or access to a laptop / desktop device - an estimated 9% of families do not have access to a laptop, desktop or tablet at home [17].

Initial Efforts

The committee's immediate attention turned to highlighting existing standalone resources, such as the Hour of Code™ [18], a nationwide initiative to introduce millions of students to one hour of computer science and computer programming. The first event SCD held during lockdown was a virtual version of the monthly RAL coding club. Normally a four-hour session, the virtual version lasted two hours via video conferencing, starting with introductions and challenge setting. The children then worked on their individual projects, with mentors staying online to help as necessary. At 16:00 everyone was asked to sign back on to do a virtual "show-and-tell". Due to SCD's recurrent interactions and strong established relationships with the cohort, this event was able to make use of video feeds and screen share from its participants.
The event helped the committee to gauge the speed at which participants can achieve goals during virtual sessions, their differing attention spans, and helpful features of Zoom, including chat and reactions. The committee also sought to interact with the wider efforts of the STFC PE and Social media teams, to provide content for a computing-themed entry in the STFC's Science at Home Campaign [19]. This led to SCD's first remote activity of the pandemic, a challenge to recreate the Flexipede [20], one of the first ever computer-generated animated characters, created in 1967 by Tony Pritchett, who had strong links to the ATLAS Computer Laboratory (now RAL). The option to "remix" an existing recreation [21], created especially for this challenge in Scratch, was also given.

Subsequent Events

Whilst these initial efforts were ongoing, the SCD PE committee worked to develop an effective autumn programme of public engagement events that could be delivered remotely. This was aided by the recent formalisation of the aims of each activity using the "Generic Learning Outcomes" (GLOs) evaluation framework [22], which ensured the new or adapted activities continued to meet the aims of the in-person activity.

Remote Data Centre Tours

The main GLOs of a face-to-face data centre tour are: understand some of the common terms used in High Performance Computing; be able to give examples of the speed / performance / memory of the machines; be able to describe what a data centre looks like. A remote data centre tour, see Fig. 4, also seeks to meet these objectives through the use of a detailed SketchUp [23] model of an STFC data centre. To help ensure this, a remote tour is delivered by the same tour guides and in much the same style as a physical tour. A benefit of being remote is that the capacity of the tour is no longer limited by the ratio of non-staff to staff and the number of members of the public who can be in the data centre at the same time. As such, a remote tour is able to reach significantly more people with a reduction in the staff-time taken to deliver the tour. For example, whilst a physical tour would be limited to 6 per trained staff member, up to a maximum of 24, remote tours have been delivered by a single staff member to over 350 people in the same time as a physical tour would have taken place. Fig. 4. A virtual data centre tour in progress. A significant challenge of a remote data centre tour is offering a smooth viewing experience to the viewer in a Zoom webinar format. Whilst the model itself will run relatively smoothly on a typical staff laptop, the tour cannot be shared over Zoom on a typical staff laptop without a significant drop in framerate. The drop in framerate would negatively impact the "be able to describe what a data centre looks like" GLO, as it would hinder the participants' ability to get a sense of how the different parts of the data centre shown relate to one another in physical space. Currently, only one of the approximately 20 data centre tour guides has access to a laptop capable of sharing the tour over Zoom at a reasonable framerate, and that laptop is a personal gaming laptop. Whilst recording a video of the tour might solve this challenge, feedback from these events has indicated that participants feel more welcome in the remote tour because it is not a video, but is being delivered live for them. Participants feeling welcome is itself a key GLO of all our activities.
Python Workshop

As part of the STFC work experience programme, SCD usually supports the Public Engagement team to run a 3-day, face-to-face, introduction to Python workshop at RAL [24] - usually attended by around 15 students. Obviously this was not possible in 2020, and as such the SCD PE committee commissioned a three-month project to rework the existing material [25] into something that could be delivered remotely as a pilot event. This project was funded by SCD and delivered by a computing apprentice. JupyterHub Notebooks hosted on the STFC Cloud [26] were chosen as the new format for the workshop, see Fig. 5, so that participants would not have to make use of any software beyond a web browser (i.e. there was no need for them to install Python). The remote version was much shorter than the face-to-face version, being delivered in four one-hour sessions. As such, and due to the technical limitation of being unable to use pygame in a JupyterHub Notebook in a web browser, the remote version focused on the introduction to Python. Much like the face-to-face version, the remote version was a taught exercise led by a member of staff and supported by 14 other members of staff, who were on hand to help to debug the schoolchildren's code via the chat function. For safeguarding, participant video, audio and screen-sharing were disabled. The remote nature of the event allowed it to reach 78 students, many more than is possible in the face-to-face version due to constraints on available computers and physical space. In the future, the material will be extended to allow it to be a standalone resource that anyone can work through in their own time. The Jupyter Notebook platform will soon be delivered as a service by the STFC Cloud, allowing the SCD PE committee to build more engagement activities on top of it, and was successfully used to deliver a remote Particle Physics Masterclass [27] in March 2021, using LHC data.

Work Experience

In addition to the introduction to Python workshop described above, work experience in 2020 took the form of a series of interactive webinars over the course of several months, focussing on different areas of STFC's work, and different career pathways and skills. These were very well received and again needed a smaller staff time input to reach a large number of students. The in-depth, immersive experience of work experience placements at the labs, however, is hard to replicate in a webinar format.

Saturday Coding

Saturday Coding has continued virtually on a monthly basis throughout the pandemic. The session times have remained at two hours' duration, and there are definite challenges of holding these events remotely: it is harder to keep the participants on task, and harder to provide quick and effective support remotely. It has, however, encouraged the participants to develop the important skills of explaining the problems they are facing quickly and clearly, and supporting one another remotely.

Ada Lovelace Day

In October 2020, SCD hosted a remote version of its popular Ada Lovelace Challenge Day [24]. The event was held over Zoom and started with an introduction to physical and virtual [28] Arduino microcontrollers, as well as block-based programming languages (such as Ardublock [29]). The face-to-face version had the GLOs that participants would understand coding and hardware, and the virtual Arduinos allowed the event to continue to meet this aim.
As schoolchildren were in the classroom for this event [30], each school was assigned a breakout room and multiple STFC staff members to help debug hardware and software issues. Each school had at least one Zoom video connection into the breakout room, with a teacher and two members of staff always present. Within a school, the schoolchildren were split into teams of 4 and each team started work on designing and building hardware- and software-based replacements for damaged systems of the Ada Lovelace, the first ship to Mars carrying a crew (see Fig. 6). To facilitate this event, some schools were sent physical Arduinos and a set of electronic components, as well as an electronic resources pack enabling them to replicate the day in the future on their own. The challenges were reworked to exclude any components that might overheat and cause even minor injury, as trained STFC staff would not be physically present with the components to ensure safety. One of the benefits of its remote nature was that the event was able to reach over 350 schoolchildren, from a more geographically diverse area, instead of the approximately 40 schoolchildren, from the relatively nearby area, that the physical event can cater for. Feedback from both teachers and parents was excellent, with one parent writing to say: "Just wanted to report that my daughter has just got home from school where she has spent all day working remotely with RAL colleagues on some coding challenges. She absolutely loved it and said that she learned tons and all those involved were really nice." There were challenges with the remote event, however. Some schools only had one video connection to the breakout rooms but had multiple classrooms' worth of schoolchildren in the event and this caused delays while information was relayed by teachers. Going forward, SCD's remote events with schools will ensure one video connection per classroom. Some schools had technical problems with installing drivers and the Arduino IDE/Ardublock on school hardware, which was necessary to work with the physical Arduinos. Going forward, the leads of an SCD remote event with schools would arrange a pre-event meeting to debug these technical issues before the event itself. Future remote Ada Lovelace Challenge Days are also likely to make more use of the virtual Arduinos, as opposed to physical Arduinos.

Remote 3

Although Remote 3 was conceived as a remote project from the start, the national lockdown and school closures necessitated a rapid re-consideration and adaptation of the planned programme. The school teams did not have access to physical Mindstorms, and often did not have reliable internet access at home. The project drew heavily on Boulby's experience and expertise of remote engagement to adapt the programme appropriately. The project stayed in touch with the teachers, offering help where needed and working with them, Boulby and the Scientific Computing Department to develop a series of online coding events which captured the core values of Remote 3, delivered weekly over 10 weeks. Each week introduced a new coding challenge, with participants creating solutions which were then demonstrated at subsequent events. Each event also featured a short talk or tour highlighting a particular aspect of science in remote environments, linking this back to the challenges the participants were undertaking, to provide an aspirational context.
The initial audience was expanded beyond the Scottish schools to include others around RAL and Boulby, with over 150 participants. Sharing good practice and lessons learned during the pandemic, the Remote 3 project was highlighted at a SUPA (Scottish Universities Physics Alliance [31]) showcase event. The project website was launched during national coding week and the project took part in the Big Bounce Festival, organised by the Institute of Physics, during which a coding workshop was delivered over a two-day period. In addition, a Remote 3 short story was developed for a much younger age group, which was delivered with great success to a wide range of audiences: from children to British Embassy staff in China, where it was a big hit. A series of curriculum-linked short activities which schools and uniform groups can undertake independently is also being worked on.

Glow Your Own coding workshops

In partnership with IF Oxford, RAL's local Science and Ideas Festival, and Fusion Arts (an Oxford-based charity that devises and delivers creative projects in the local and wider community), SCD supported and led a series of coding workshops using Arduinos, culminating in participants creating their own coded Christmas Lanterns. SCD provided support to enable IF Oxford to supply the community with physical Arduinos, removing the price barrier for those living in areas of higher socio-economic deprivation in Oxford. In addition to this, the link with the arts resulted in a high proportion of participants who had never coded before. The feedback was again extremely positive: "I'm astounded by the ambition you have with these series of workshops and how brilliantly well are explaining everything and making it all work for everyone. It's complex stuff, new to us and yet, here we all are, making it all work, with your help. Brilliant teamwork, great patience and dealing with the video and chat, all at the same time. BIG THANK YOU!!" The pacing of these workshops was very difficult to judge over Zoom, with a wide range of participants, but the project was so successful that it has been revived to take forward over the coming year.

PE Induction for staff

After running SCD's first PE induction in 2019 [32], a second workshop was delivered in October 2020 in a remote fashion. This time, it focused on the new remote activities developed earlier in the year: the JupyterHub-based work experience workshop and the remote Arduino activity. The participants were introduced to common problems faced by students of both activities and how to debug and solve them. Following feedback from the previous induction, this time the workshop was open to everyone in SCD, not just early-career staff. Evaluating the workshop's effectiveness and planning next steps is very important: to this end, the participants were asked to rate the induction as well as suggest improvements and follow-up training. The delivery of this event benefited from the lessons learnt from the Python workshop and Ada Lovelace Challenge Day, and many of the issues faced in those events were not repeated. Due to its virtual nature, the number of breaks in the session was increased to avoid Zoom fatigue, and as such the number of topics covered was reduced accordingly.

Benefits

Fortunately, in many ways computing lends itself to remote delivery more easily than other subjects might, and switching to a purely remote programme of public engagement has brought benefits to SCD's public engagement efforts.
One benefit is that the programme is no longer limited by the physical constraints of an STFC site, nor limited by geography to those who have the means and ability to travel to an STFC site. Without the cost of SCD staff going into schools to deliver engagement, the SCD PE Committee has been able to send resources, such as Arduinos, to schools so that they can participate in events remotely. These resources then remain at the school, allowing teachers to redeliver the activity themselves in future years. Given that the majority of staff within SCD, and STFC as a whole, are now working remotely from home, the pandemic has also encouraged more interaction between the public engagement efforts of STFC's two largest sites, the Rutherford Appleton and Daresbury Laboratories. The pandemic has also acted as a catalyst for the SCD PE Committee to adapt existing resources and develop new resources such that they can be delivered remotely. Whilst these resources do fill a short-term need for remote public engagement, they also help achieve the longer-term aim of reaching a more geographically diverse audience. When in-person public engagement returns, given the clear benefits of remote engagement, the programme is not foreseen to return to being purely in-person. These remote activities will continue to be a part of a blended in-person/remote hybrid programme of Public Engagement, though it is not yet clear what proportion of activities will be remote going forward. Over the coming months we will assess which types of activity benefit most from being delivered remotely, as opposed to in-person delivery, and will adjust the balance of the programme accordingly. Times of national lockdown and school closures are causes of stress for the public, teachers and parents. With people confined, the opportunity to visit places like STFC's big facilities virtually is welcome. With schools closed, the fact STFC can reach out with resources is perhaps even more welcome. The ubiquity of video conferencing in the public's everyday life as a result of the pandemic has made remote public engagement easier. Before the pandemic, it was easier for teachers and schoolchildren to come onto site than it was to arrange a video call on a suitable, well-used and well-understood platform. The pandemic resulted in video conferencing software becoming much more of a necessity for organisations (as an example, STFC did not have the technical capability to host large-scale webinars before the pandemic - the necessary licence was only acquired after mass working from home began). The software itself has also become much more fully featured in the areas of privacy and security, and people are far more confident now using this kind of software.

Challenges

Remote Public Engagement brings its own set of challenges. Some of these challenges are a result of the uncertainty of life in a global pandemic; however, some are specifically challenges for remote engagement. When schoolchildren started to return to the classroom in summer 2020, STFC as a whole saw less of a demand for the public engagement programme, as teachers had to focus on catching up on progress through the curriculum which had been lost during the first lockdown. This disproportionately affected disadvantaged pupils [33], who tend to have lower science capital and as such are one of STFC's key PE audiences.
Planning remote public engagement events against the backdrop of uncertainty, not knowing whether the majority of schoolchildren will be in classrooms or home-learning, is also more challenging. As SCD and STFC plan their public engagement programmes for 2021, plans must be made not only for remote events, but also for remote events that will work well regardless of whether the schoolchildren are physically together in a classroom, learning from home, or a mix of the two. Evaluating remote public engagement is inherently a different problem to evaluating in-person engagement, regardless of a pandemic. STFC collects evaluation data from all its events. When these events were in-person, individual participants would often evaluate the event as the event was ending, while still on site. This cannot be enforced in a remote setting. Participants with a negative view of an event may leave early, before any evaluation takes place, and those who do provide feedback after an event has ended may be those who have extreme views of the event. In a remote setting, evaluation of events is also collected per Zoom connection, which is typically per household or classroom. With the loss of this fine-grained information, often a consensus view of the event is reported and it is more difficult to get accurate demographic information, such as the number of girls engaged, which is a key STFC demographic [2]. As such, evaluation may skew to the positive and it is difficult to make direct comparisons of how the programme has changed due to the shift to remote engagement. The sudden shift in the programme from in-person to remote engagement also makes comparisons of the evaluation data from this year to previous years difficult. SCD was set to commission a project to evaluate its in-person programme, to ensure it was meeting its aims and to improve the programme as a whole. Remote public engagement also has its own challenges to safeguarding. With face-to-face engagement, it is often trivial to ensure a single member of staff is never left alone with a schoolchild in a private space by simply avoiding areas with little to no footfall. However, with remote public engagement, every Zoom call is essentially a private place. As such, STFC took technical and procedural steps to ensure safeguarding requirements were still met. These included: disabling participant video and screen sharing for events such as the Python workshop; and ensuring any breakout room has at least two members of staff in it at all times. Getting new people involved with the STFC public engagement programme has also suffered during lockdown, though SCD's computing activities do have a higher proportion of new people involved due to its culture of PE [32] and the virtual PE induction. During the pandemic it can be difficult for some staff to work effectively from home, due to childcare, wellbeing or other concerns, let alone deliver remote public engagement. In the future, staff who have found it difficult to work from home will likely return to site - which may reduce some of the barriers that are currently stopping them getting involved in PE. Conversely, more staff may start working mostly from home, which may place new barriers on them getting involved with in-person PE. The timing of any future PE training will have to take into account these new working patterns, perhaps co-locating the training with other events that require staff to attend an STFC site.
Also, remote events tend to require fewer members of staff to deliver and, with the added difficulty of delivering online, the temptation is to fall back on those with experience of delivering the session rather than on new members of staff. There are also technical limitations. A virtual tour is computationally expensive to run, limiting who can deliver it to those with high-end gaming machines, which are typically personal devices.

Future

The SCD PE Committee is aware that many of the new activities created require a level of access to the internet and technology that not everyone in the UK will have, especially at home. This "digital divide" [34] still limits the reach of the engagement SCD conducts. As the programme for 2021 takes shape, SCD is keen to lower as many technological barriers as possible. This could be through reducing the number of screens/devices needed for optimal participation in events, sending the necessary hardware to participants, or removing the need for a computer/internet connection entirely. The STFC Public Engagement Team is currently working on the development of a data logger project, where hardware (Raspberry Pis and sensors) will be provided to families through (e.g.) libraries and community groups. STFC is also working with our local Computing at School to improve accessibility for remote events. One way to remove the need for a computer/internet connection entirely would be to leverage the paper-based "Unplugged Computing" activities recently created at CERN, such as the "Introduction to Programming" activity (where a grid, maze and arrow cards are used by children to navigate a robot between two squares) and the "What's Inside a Computer" activity (where participants learn about some core components of computers) [35]. Indeed, SCD had started to look into how these could be incorporated into the Daresbury Open Week before it was cancelled. Whilst 2020 was an exercise in improvisation, 2021 has been planned from the outset to be a year of mostly remote public engagement. The SCD PE committee will build on the lessons learnt from the initial events discussed in Section 4 to deliver a programme of engagement and endeavour to ensure the schools programme will work well regardless of whether schoolchildren are in classrooms, at home, or a mix of both.

Conclusions

Since the start of the pandemic, remote Public Engagement has been the norm for SCD and STFC as a whole. SCD was forced to abandon an extensive planned programme, including the Daresbury Open Week (into which significant effort had already been put), alongside altering the day-to-day working methods of the majority of its staff. The year was initially an exercise in improvisation, with existing activities that could be delivered remotely with minimal changes quickly identified. During this time, SCD also created new resources specifically for a remote audience and adapted existing activities where appropriate. Throughout the process, SCD has relied on the STFC Evaluation Framework and the concept of "Generic Learning Outcomes" to ensure the new remote events continue to meet the same objectives as the face-to-face portfolio. As a result, SCD now has a portfolio of remote activities and, whilst remote public engagement brings both benefits and challenges, SCD firmly believes the remote activities developed, and those that will be developed, during the pandemic will remain a key part of the public engagement programme when staff and visitors are able to return to the sites.
This all contributes to STFC's long-held strategic aim to extend the reach of its public engagement to audiences geographically remote from an STFC site. Moving forward, SCD will learn from the pilot events this year to deliver an improved programme in 2021, which is expected to be another year of remote engagement.
7,942.2
2021-01-01T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Observer-Based Time-Variant Spacing Policy for a Platoon of Non-Holonomic Mobile Robots

This paper presents a navigation strategy for a platoon of n non-holonomic mobile robots with a time-varying spacing policy between each pair of successive robots in the platoon, such that a safe trailing distance is maintained at any speed, avoiding the robots getting too close to each other. It is intended that all the vehicles in the formation follow the trajectory described by the leader robot, which is generated by bounded input velocities. To establish a chain formation among the vehicles, it is required that, for each pair of successive vehicles, the (i+1)-th one follows the trajectory executed by the former i-th one, with a delay of τ(t) units of time. An observer is proposed to estimate the trajectory, velocities, and positions of the i-th vehicle, delayed τ(t) units of time, consequently generating the desired path for the (i+1)-th vehicle, avoiding numerical approximations of the velocities and rendering robustness against noise and corrupted or missing data, as well as to external disturbances. Besides the time-varying gap, a constant-time gap is used to get a secure trailing distance between each two successive robots. The presented platoon formation strategy is analyzed and proven by using Lyapunov theory, concluding asymptotic convergence for the posture tracking between the (i+1)-th robot and the virtual reference provided by the observer that corresponds to the i-th robot. The strategy is evaluated by numerical simulations and real-time experiments.

Introduction

There are several mobile robot applications that take advantage of platooning strategies to improve performance or because of safety issues. Whether for street vehicles or small mobile robot applications, a platoon is formed by a leading vehicle and a known group of follower vehicles that may not be aware of all the members that make up the squad, or of all the information that comes from them; usually each robot has information only from its predecessor. The control of vehicle platoons has led to several approaches to deal with this problem, highlighting the follower vehicle scheme and cooperative adaptive cruise control (CACC) [1,2]. Although the control actions for a dynamic model of a vehicle (acceleration and steering) are different from those of a kinematic perspective (linear and rotational velocities), the strategies to render platoon formation are basically the same and can be extrapolated to any type of mobile robot. The main goal is to ensure that the vehicles form a chain, maintaining a separation between them dictated by a spacing policy, either distance- or time-based [3-10]. In addition, string stability has to be guaranteed by assuring the attenuation of the effects of disturbances, caused by initial conditions, speed variations, or external disturbances, along the vehicle string [5,11-13]. As mentioned, platoon formation can be seen as a leader-follower problem considering pairs of successive vehicles in the chain, such as the framework presented in [14], where the leader-follower formation is converted into a trajectory tracking control problem, with the aim of keeping a desired constant distance and a bearing angle between the follower and leader robots; the proposed control strategy considers both the kinematic and dynamic models of a differential mobile robot, particularly of the TurtleBot.
Simulation and experimental results show tracking of a desired trajectory for a triangle formation; however, it is observed that keeping a distance and bearing angle between leader and follower robots prevents the follower from performing exactly the same path as the leader, which is more obvious when moving into a curve, since the follower would develop a parallel curve with respect to the leader, or even an over-steering or under-steering curve depending on whether the follower is on the inner or outer side of the curve. Similar results are shown in [15], where a dispersed structure is forced as a formation onto a group of non-holonomic mobile robots; such a virtual structure is kept by defining relative distances and angles between the robots in the group. The aforementioned works use a type of constant spacing policy, since they enforce constant distances and bearing angles between the robots in the formation. Other works based on the constant spacing policy are, for instance, [3,4,16-19]. As mentioned before, among vehicle platooning there is a large number of works devoted to solving the longitudinal control of vehicles, without considering the lateral control [18,20,21]; furthermore, either a point-mass system or double-integrator dynamics is commonly assumed, such as in [17,19,20], where platoon control is proposed to ensure a prescribed performance. Several works under a time spacing scheme have been carried out in the past few years, see for instance [3-5,12,20,22]. A constant distance and a time spacing policy are considered in [3] for platoons of differential-drive mobile robots by using odometry information. In [12], a constant headway time is designed to obtain a graceful degradation of one-vehicle look-ahead CACC based on estimating the preceding vehicle's acceleration. A delay-based spacing policy has been designed in [5] for the control of vehicle platoons considering disturbances and the string stability of the approximated model of the vehicles. Another delay-based spacing policy can be seen in [23,24], where an approach based on an input-delay observer to obtain a fixed time-gap separation is used for a differential-drive mobile robot platoon. It has been pointed out that a time spacing policy improves traffic efficiency compared to constant distance separations [12], because if the platoon moves at a slow speed, it will not be necessary to maintain a large distance between vehicles, which would make the string inefficient. For this reason, the use of variable separations as a function of the speed of the platoon is a method to improve performance. In a time spacing policy, most approaches consider that the velocity of the platoon should not approach zero, since if it is reduced, the vehicle separation will also approach zero and there could be a collision between the vehicles. For this reason, the present paper proposes an extension of the work developed in [23,24], where a fixed time-delay spacing policy is proposed, broadening the work to a time-varying spacing policy inspired by artificial potential fields, which ensures a safe distance between the members of the string, preventing robots from getting too close to each other. Under these conditions, the work considers a squad of non-holonomic mobile robots, where each robot is intended to follow the position and orientation of its preceding vehicle in the formation, while maintaining a time-varying gap that avoids collisions when the robots approach each other.
To accomplish this task, the (i + 1)-th robot will follow the path of the i-th robot, delayed by a time-varying gap, so an input-delay observer that estimates the trajectory to be executed by each (i + 1)-th robot is proposed, rendering robustness to the platoon against external disturbances. The presented scheme is proven by Lyapunov stability theory. The performance of the strategy was evaluated by numerical simulations and real-time experiments. The rest of the document continues in Section 2, presenting the problem formulation associated with a set of n type (2,0) mobile robots, represented by kinematic models. Further, the proposal of the time-varying spacing policy is analyzed. In Section 3, the design of an input-delayed observer to generate desired trajectories is presented, and in Section 4, the proposed navigation strategy is described in detail. Section 5 is devoted to the evaluation of the presented time-varying spacing policy through numerical simulation and real-time experiments. A discussion of the performance of the proposed strategy based on the numerical simulation and experimental results is presented in Section 6. Finally, Section 7 presents the conclusions of the work.

Problem Formulation

Take into account a set of n differential-drive mobile robots that have two actuated wheels and move on the X − Y plane, as the one shown in Figure 1. The position at time t of the point Q, located at the midpoint of the robot's wheel axis, with respect to the global coordinate frame O_xy, is denoted by the coordinates x(t) and y(t), while the orientation is denoted by θ(t). The kinematic model of this robot can be obtained from the geometric interrelations shown in Figure 1 [25], obtaining for the i-th robot,

ẋ_i(t) = ν_i(t) cos θ_i(t),  ẏ_i(t) = ν_i(t) sin θ_i(t),  θ̇_i(t) = ω_i(t),   (1)

where [x_i(t), y_i(t), θ_i(t)]^T is the state vector of the system and u_i(t) = [ν_i(t), ω_i(t)]^T are the input signals, in which ν_i(t) is the linear or translational velocity and ω_i(t) is the angular velocity, with i = 1, 2, 3, · · · , n. It is assumed that the robot is a rigid mechanism that ideally moves on a flat surface, without friction, and moves only by the velocities exerted by the wheels; the vertical axes of the wheels are perpendicular to the ground where it moves; and the non-holonomic constraint,

ẋ_i(t) sin θ_i(t) − ẏ_i(t) cos θ_i(t) = 0,   (2)

is satisfied at all times. Figure 1. Differential-drive mobile robot.

Platoon Formation

For the set of n wheeled mobile robots, a platoon formation problem is taken into consideration in which the first robot (i = 1, leader of the formation) performs any trajectory produced by bounded input velocities ν_i(t), ω_i(t); this is, only the leader knows the path to be performed by the platoon. Note that in the case of a vehicle car platoon, this assumption is satisfied by considering that the leader vehicle is driven by a human operator, who sets the traveling route. The platoon consists of a string formation topology, where the (i + 1)-th robot, for i = 1, · · · , n − 1, will perform the trajectory of the i-th robot delayed by a specific time-varying gap τ_i(t), to maintain a time-varying spacing policy between successive robots. To obtain the delayed trajectory of the i-th robot in the formation, an input-delay observer is designed based on the position measurements of the i-th robot. The considered time-varying strategy is shown in Figure 2, where a convoy of three robots and two observers is presented.
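For readers who want to experiment, the following minimal Python sketch (ours, not part of the original paper) integrates the kinematic model (1) with a simple forward Euler step; the function name, step size, and example inputs are illustrative assumptions only.

import numpy as np

def unicycle_step(pose, v, w, dt):
    # One forward-Euler step of the kinematic model (1):
    #   x' = v cos(theta), y' = v sin(theta), theta' = w
    x, y, theta = pose
    return np.array([x + dt * v * np.cos(theta),
                     y + dt * v * np.sin(theta),
                     theta + dt * w])

# Example: a leader robot driven by bounded, constant inputs traces an arc.
pose = np.zeros(3)                       # start at the origin of O_xy
for _ in range(1000):
    pose = unicycle_step(pose, v=0.3, w=0.1, dt=0.01)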
It is intended that the (i + 1)-th robot tracks the path provided by the i-th observer, for i = 1, · · · , n − 1, maintaining the formation while preventing the robots from getting too close to each other.

Spacing Policy

When considering a time spacing policy in a string formation, the platoon performance is improved with respect to fixed distance spacing policies [12], since in a time-gap strategy the distance between any pair of successive robots varies depending on the velocity of the members of the chain, decreasing for slow velocities and increasing otherwise. This strategy is intuitively applied by most human drivers when the speed of the vehicles decreases, for instance when vehicles approach a pedestrian crossing line or a red traffic light, where it is not necessary to keep a large inter-vehicle distance. Notice that in a fixed time-gap strategy, reducing the traveling velocity may produce a collision situation, since the inter-vehicle distance also decreases. To avoid such a collision scenario, in this work a time-varying spacing policy, inversely proportional to the distance between vehicles, is proposed based on the fixed-time spacing policy presented in [24]. It considers a time-varying gap τ_i(t) that increases its magnitude when the distance between vehicles reaches a threshold, which implies that the velocity of the formation is too slow. For this time-varying gap, the following assumption is taken into account. The time-varying gap is proposed as

τ_i(t) = τ_i^s + τ_i^c(t),   (3)

where τ_i^s > 0 is a fixed time-gap, which keeps a distance between each pair of successive vehicles in the platoon that also varies according to the velocity, and τ_i^c(t) is a time-varying gap that increases when the i-th and the (i + 1)-th robot approach each other, rendering a safe distance. That is, τ_i^c(t) adds a varying distance to the one induced by τ_i^s > 0, and this varying distance is activated when slow velocities render a separation distance smaller than a desired magnitude. The time-varying gap τ_i(t) should be a bounded non-negative differentiable function, whose value tends to increase as the (i + 1)-th robot approaches the i-th robot. Inspired by [26], the time-varying component τ_i^c(t) is proposed as a potential-field-like function of the inter-robot distance, where r̄_i^c and r_i^c are positive, non-zero constants that determine the zone where τ_i^c(t) is defined around the (i + 1)-th robot, see Figure 3. The parameter α_i is a constant of proportionality and d_i(t) is the Euclidean distance between the positions of the i-th and (i + 1)-th robots, this is,

d_i(t) = ( (x_i(t) − x_{i+1}(t))² + (y_i(t) − y_{i+1}(t))² )^(1/2).

Using the Euclidean distance is conservative with respect to the distance between two successive vehicles along the path; nevertheless, it is rather easier to determine by using onboard sensors, as well as from positions transmitted between the vehicles. Meanwhile, safe trailing distance and assured clear distance ahead (ACDA) are implicitly satisfied when considering a standard distance between vehicles. Note that the definition of τ_i^c(t) implies that it is bounded from below by zero. Thus, when the distance between vehicles is bigger than the desired safe distance, d_i(t) > r_i^c, the separation distance is a function only of the constant-time gap τ_i^s. Otherwise, when the separation distance between vehicles becomes smaller than the safe distance, the time-varying component τ_i^c(t) tends to increase its magnitude to produce the desired safe distance.
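Since the explicit expression of τ_i^c(t) is not reproduced above, the short Python sketch below should be read only as one plausible potential-field-style gap law with the stated qualitative properties (zero outside the influence zone, growing as d_i(t) approaches the inner radius, bounded for a positive inner radius); the functional form, names, and numbers are our assumptions, not the authors' equation.

import numpy as np

def time_varying_gap(d, tau_s, alpha, r_inner, r_outer):
    # tau_i(t) = tau_s + tau_c(t), cf. (3). The form of tau_c below is an
    # assumed, repulsive-potential-style choice: it vanishes for d > r_outer,
    # grows as d shrinks towards r_inner, and stays bounded because d is
    # clamped at r_inner (the bound blows up only if r_inner tends to zero).
    if d > r_outer:
        tau_c = 0.0
    else:
        d_clamped = max(d, r_inner)
        tau_c = alpha * (1.0 / d_clamped - 1.0 / r_outer)
    return tau_s + tau_c

def euclidean_distance(p_i, p_ip1):
    # d_i(t): Euclidean distance between robot i and robot i+1.
    return float(np.linalg.norm(np.asarray(p_i) - np.asarray(p_ip1)))

# Outside the influence zone the gap is just tau_s; inside, it grows.
print(time_varying_gap(d=1.5, tau_s=2.0, alpha=0.5, r_inner=0.2, r_outer=1.0))  # 2.0
print(time_varying_gap(d=0.4, tau_s=2.0, alpha=0.5, r_inner=0.2, r_outer=1.0))  # 2.75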
As a matter of fact, the time-varying component τ_i^c(t) increases and decreases the separation gap within the limits generated by the admissible values of d_i(t), and the time derivative of the time-varying gap, τ̇_i(t), follows directly from the definition of τ_i^c(t). Notice that the time-varying gap τ_i(t) will increase its magnitude, affecting the behavior of the (i + 1)-th robot, only when the i-th robot gets inside the influence zone D(r_i^c) of the (i + 1)-th robot, defined over the mobile robot frame X_i^m − Y_i^m, avoiding collisions between them, as depicted in Figure 3. The values of r̄_i^c and r_i^c, and of the constant time-gap τ_i^s, must be chosen according to the size of the robot and the desired working conditions, particularly according to the desired safe distance and the maximum rejection action, set by the inner radius r̄_i^c, which for a zero value would imply an infinite action.

Remark 1. The time-varying gap τ_i(t), as defined in (3), is a bounded function independently of the positions of the i-th and (i + 1)-th robots; this is, τ_i(t) will be bounded, where the upper bound of τ_i^c(t) could increase to infinity in the case that r̄_i^c tends to zero. Notice also that, by the definition of τ_i^c(t) and from (6), τ̇_i(t) will also be a bounded function. Therefore, the time-varying gap τ_i(t) and its time derivative τ̇_i(t) are bounded for all t > 0 in their region of definition, this is, sup{τ_i(t)} ≤ τ̄_i and sup{τ̇_i(t)} ≤ µ_i, with τ̄_i, µ_i ∈ R+.

Remark 2. To determine the parameters involved in the influence zone D(r_i^c), the following simple heuristic steps are proposed: (i) Determine r̄_i^c depending on the physical structure of the robots. (ii) Propose the size of the influence zone by setting r_i^c such that r̄_i^c ≤ r_i^c. (iii) Determine α_i such that the increment produced on τ_i(t) avoids a possible collision between the involved successive robots. This parameter depends on the possible velocity of the robots.

Input-Delayed Observer

With the aim of estimating the delayed trajectory accomplished by the i-th robot, an input-delay observer is designed to provide the delayed position, orientation, and velocities of the i-th robot based on current measurements. The following assumptions are taken into account: that the state of each robot can be measured (Assumption 1), and that the input velocities are bounded (Assumption 2). Associated to the i-th robot, it is possible to define the time-varying function

φ_i(t) = t − τ_i(t),   (8)

and the set of τ_i(t) units of time delayed variables

w_ji(t) = x_ji(t − τ_i(t)),  j = 1, 2, 3,   (9)

where x_ji denotes the j-th component of the state of the i-th robot and it is assumed that t > τ_i(t). Considering the kinematic model of the i-th robot (1), the time derivatives of the delayed states (9) follow by the chain rule, scaled by φ̇_i(t) = 1 − τ̇_i(t). The dynamics of the i-th mobile robot (1), considering a τ_i(t) units of time delay, can then be rewritten, with injection terms, as the observer (12), where λ_1i, λ_2i ∈ R+ are observer gains and e_w1i(t), e_w2i(t) and e_w3i(t) are the observation errors between the delayed states w_ji(t) and the observer states ŵ_ji(t), defined in (13). Note that the observation errors e_wji(t) depend on the delayed variable w_ji(t) = x_ji(t − τ_i(t)) that has to be injected into the observer. Due to the fact that the i-th robot state (1) can be measured (Assumption 1), the injection error in (12) is obtained by storing a segment of the trajectory of the i-th robot (1).

Convergence of the Observation Errors

In order to facilitate the analysis of the convergence properties of the observation errors (13), instead of considering the original inertial representation of the observer (12), the body-frame representation is obtained by means of a rotation of the observation errors, in the form of the transformation (14); this yields the rotated errors ē_1i(t), ē_2i(t), ē_3i(t) in (15). Taking the time derivative of the observation errors (15), the error dynamics (16) are obtained.
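As noted above, the injection terms of the observer only require a stored segment of the i-th robot's measured trajectory, from which the τ_i(t)-delayed posture can be recovered. The Python sketch below shows one simple way to do this with a sliding buffer and linear interpolation; it is an illustrative helper under our own assumptions (class and method names are ours), not the authors' implementation, and it ignores angle wrapping for brevity.

from collections import deque
import numpy as np

class DelayedStateBuffer:
    # Sliding window of (time, posture) samples of the i-th robot, used to
    # evaluate x_i(t - tau_i(t)) for the observer injection error.
    def __init__(self, horizon):
        self.horizon = horizon   # seconds of history to retain
        self.samples = deque()   # (t, [x, y, theta]) pairs, time-ordered

    def push(self, t, posture):
        self.samples.append((t, np.asarray(posture, dtype=float)))
        while self.samples and t - self.samples[0][0] > self.horizon:
            self.samples.popleft()

    def delayed(self, t, tau):
        # Linearly interpolate the stored posture at time t - tau,
        # clamped to the available history.
        times = np.array([s[0] for s in self.samples])
        states = np.vstack([s[1] for s in self.samples])
        t_query = float(np.clip(t - tau, times[0], times[-1]))
        return np.array([np.interp(t_query, times, states[:, k])
                         for k in range(states.shape[1])])

# Usage: push the measured posture every control cycle, then query the
# delayed posture with the current time-varying gap tau_i(t).
buf = DelayedStateBuffer(horizon=10.0)
for k in range(500):
    t = 0.01 * k
    buf.push(t, [t, 0.0, 0.0])           # dummy straight-line trajectory
print(buf.delayed(t=4.99, tau=2.0))       # approx. [2.99, 0.0, 0.0]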
Consider that the i-th robot satisfies Assumptions 1 and 2. Then, if λ 1 i , λ 2 i > 0, the states ŵ j i and their time derivatives, for j = 1, 2, 3, given by the observer (12), converge exponentially to the trajectory of the i-th robot delayed by τ i (t) units of time.

Proof. Suppose that the leader of the platoon, robot i = 1, satisfies Assumptions 1 and 2, and assume that the second robot (i = 2) follows the delayed trajectory of the leader robot, provided by the state of the time-varying delayed observer (12). If robot 2 follows the delayed trajectory of robot 1, it is evident that robot 2 must have a set of bounded inputs that allows it to follow the desired trajectory. Assuming that the preceding arguments are valid up to the (i − 1)-th robot, it is possible to consider them for the i-th robot. Taking into consideration that, for λ 2 i > 0, the error ē 3 i (t) converges exponentially to zero, the proof reduces to demonstrating the convergence of the errors ē 1 i (t) and ē 2 i (t). From (16) their dynamics can be written as system (19), where, by Assumption 1, the exogenous term is a fading signal for system (19) and tends to zero as ē 3 i (t) approaches the origin, regardless of the evolution of ē 1 i (t) and ē 2 i (t). Then, the errors ē 1 i (t) and ē 2 i (t) converge to the origin according to the evolution of the disturbance-free system (20). For the time-varying system (20), consider the candidate Lyapunov function (21). The time derivative of (21) establishes global exponential stability of the errors ē 1 i (t) and ē 2 i (t).

Remark 3. Note that the provided observer (12) converges exponentially, depending on the values λ ji , no matter what time delay is considered. Further, note that the convergence of the observer state to the delayed state of the i-th robot renders the delayed values of the desired velocities which, all together, provide the desired delayed path for the corresponding (i + 1)-th robot.

Remark 4. It should be pointed out that observer (12) avoids the use of an approximate estimation of the velocities of the i-th robot, for example by means of the so-called dirty derivative, which could be an obstacle for the observer-based closed-loop stability analysis presented in the next section. Furthermore, the use of the observer does not require storing a large amount of data, while rendering robustness against noise and corrupted or missing data, since the observer acts as a natural filter.

Remark 5. Notice that Assumption 2 corresponds to a physical constraint for the leader robot in the formation, which allows having a feasible trajectory for the entire platoon that satisfies the non-holonomic constraint (2) as soon as convergence of the observation errors is achieved.

Navigation Strategy

The platoon consists of a set of n robots that travel in chain formation as shown in Figure 2. To obtain the convergence of the state of the (i + 1)-th robot to the state of the i-th observer, without loss of generality, it is assumed that the leader robot in the formation (i = 1) is always in motion; this is stated in the following assumption.

Remark 6. Related to Assumption 3, notice that the case ν 1 (t) = ω 1 (t) = 0 forces the overall formation to be at a standstill, a situation that is not relevant from a formation control point of view. On the other hand, the case ν 1 (t) ≠ 0, ω 1 (t) = 0 produces a straight-line motion of the formation, and the case ν 1 (t) = 0, ω 1 (t) ≠ 0 produces a rotation of the leader robot about a point; thus, in the latter case the desired chain formation does not make sense.
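The signal that the navigation strategy below consumes is exactly what the observer provides: the state of the preceding robot delayed by τ i (t). As a minimal illustration only — this is not the observer (12), which avoids storing long data segments and additionally filters noise — that delayed reference can be mimicked by buffering timestamped measurements and interpolating; all names in the snippet are illustrative.

```python
import numpy as np
from collections import deque


class DelayedStateBuffer:
    """Minimal stand-in for the delayed reference produced by the observer:
    returns the stored state of the i-th robot tau seconds in the past by
    linear interpolation over a buffer of timestamped measurements."""

    def __init__(self, horizon=30.0):
        self.buf = deque()      # (t, state) pairs, state = (x, y, theta)
        self.horizon = horizon  # seconds of history to keep

    def push(self, t, state):
        self.buf.append((t, np.asarray(state, dtype=float)))
        while self.buf and t - self.buf[0][0] > self.horizon:
            self.buf.popleft()

    def delayed(self, t, tau):
        """State at time t - tau (assumes the buffer already covers t - tau)."""
        t_q = t - tau
        prev = self.buf[0]
        for t_k, s_k in self.buf:
            if t_k >= t_q:
                t_p, s_p = prev
                w = 0.0 if t_k == t_p else (t_q - t_p) / (t_k - t_p)
                return (1.0 - w) * s_p + w * s_k
            prev = (t_k, s_k)
        return self.buf[-1][1]
```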
For the navigation strategy, we take into account the kinematic model of the (i + 1)-th robotẋ As mentioned earlier, the (i + 1)-th robot will track the trajectory, delayed τ i (t) units of time, of its precedent i-th robot that will be estimated by means of the i-th observer (12). Initially, we define the tracking errors between the state of the (i + 1)-th robot and the i-th observer (12), as To facilitate the analysis of the tracking errors, a change of coordinates in the form of (14) is considered, i.e., The time derivative of the errors (25) produces, For the tracking error system (26) we now consider the virtual inputs, that in closed-loop produces the new representation, It is possible now, inspired by [25,27], to propose the nonlinear feedback, Taking into account Equations (27) and (29), the actual feedback that solves the stabilization problem for the tracking errors (26) is written in the form, The closed-loop system (26)-(30) is obtained as, where, Tracking Errors Convergence The convergence properties of the navigation strategy are formally presented in the following lemma. Proof. To see the convergence of the tracking errors in (31), notice first that the time-varying term ψ i (t) only depends on the observation errors e w 1i (t) and e w 2i (t) whose convergence was already proven, therefore ψ i (t) → 0 as e w ji (t) → 0 independently of the evolution of the tracking errors e j i , and therefore, it can be considered as a fading exogenous signal. Because of this, the stability of the closed-loop system (31) can be established by analyzing the perturbation-free system (31) obtained by considering ψ i (t) = 0. To show the stability of system (31) with ψ i (t) = 0, consider the candidate Lyapunov function, with a time derivative given by, SinceV(e j i ) ≤ 0, the system is stable. If Assumption 2 is satisfied, along the system solution, e j i and thusė j i are bounded; this implies that the time derivative ofV(e j i ) is bounded and therefore it is uniformly continuous. Invoking Barbalat's Lemma [28],V(e j i ) −→ 0. That is, e 1 i (t) and e 3 i (t) converge to zero. From system (31) with ψ i (t) = 0, it is clear that, because of the convergence of e 1 i (t) and e 3 i (t). Notice that, sin e 3 i (t) e 3 i (t) = 1 and sinceφ i (t) = 1 −τ i (t) can not be zero with a bounded τ i (t); therefore, if Assumption 3 is satisfied, e 2 i (t) will converge to the origin. Remark 7. It should be pointed out that in spite of the time-varying delay τ c i , the tracking errors e ji always converge, and therefore the time delay τ i converge to a constant value inside the region r c i ≤ d i (t) ≤ r c i . Outside this region, τ i is constant as desired. Navigation Strategy Evaluation The performance of the formation scheme presented in this paper was evaluated through numerical simulations and real-time experiments carried out by considering a Lemniscate-type path, as well as an oval track racing circuit for a group of four vehicles. For the sake of comparison, for the Lemniscate path, simulation and experimental results are presented, showing good correspondence with the obtained results. Meanwhile, for the oval track and due to a lack of space, only experimental results are shown. Nevertheless, since such track was intended to evaluate the performance of the controller when the angular velocity is zero and for the case of reference velocities discontinuities, the experimental results allow concluding the robustness of the proposed controller. 
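Before the evaluation scenarios, the change of coordinates in (25), which rotates the inertial tracking errors into the follower's body frame, can be sketched as follows. The state layout (x, y, θ) and the sign convention follow the usual unicycle tracking-error formulation and are assumptions of this sketch, since the explicit expressions are not reproduced above.

```python
import numpy as np


def body_frame_errors(follower_state, delayed_reference):
    """Rotate the inertial tracking errors into the follower's body frame.

    follower_state, delayed_reference: sequences (x, y, theta); the reference
    is the tau-delayed state of the preceding robot provided by its observer.
    Returns (e1, e2, e3): longitudinal, lateral and orientation errors.
    """
    x, y, th = follower_state
    xr, yr, thr = delayed_reference
    c, s = np.cos(th), np.sin(th)
    e1 = c * (xr - x) + s * (yr - y)      # error along the heading axis
    e2 = -s * (xr - x) + c * (yr - y)     # error along the lateral axis
    e3 = np.arctan2(np.sin(thr - th), np.cos(thr - th))  # wrapped heading error
    return e1, e2, e3
```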
Lemniscate-Type Path

Since the first robot of the platoon can follow any trajectory produced by the action of bounded input velocities, a specific trajectory for the formation is generated by feeding the leader with input velocity signals defined from a reference path, where x re f (t) and y re f (t) correspond to the X − Y coordinates of the desired reference path. To generate a Lemniscate-type trajectory, the leader robot is fed with the linear (ν 1 ) and angular (ω 1 ) velocities given by (36), where the parameters a = 0.8, b = 0.6 and p = 2π/50 are considered (a sketch of this reference-velocity generation is given at the end of this subsection). This path involves orientation and velocity changes that influence the relative distance between the robots, allowing us to evaluate the effectiveness of the time-varying spacing policy presented in this paper. For all the experiments and simulations, unless otherwise specified, the spacing policy (3) and (4) uses the parameters listed in Table 1 for i = 1, 2, 3.

Table 1. Spacing-policy parameters for each follower robot (i + 1): α i = 0.6, r c i = 0.7 m, r̄ c i = 0.1 m, τ s i = 6 s.

Numerical Evaluation in a Simulation Frame (NE)

To carry out the numerical evaluation, the gains used by the observers (12) were λ 1 i = λ 2 i = 0.5 and λ 3 i = 0.3 for i = 1, 2, 3, and the gains for the control laws (30) of the follower robots were set to k 1 i = 0.4, k 2 i = 0.1, k 3 i = 0.25 for i = 2, 3, 4. The initial conditions of the mobile robots (i = 1, 2, 3, 4) and of the delayed observers (i = 1, 2, 3) are shown in Table 2. For the spacing policy (3) and (4), in order to show the difference between the time-varying strategy developed in this work and the previous fixed time-gap strategy [23,24], it is assumed for the second robot (i = 2) in the formation that τ c 1 = 0 for all t and τ s 1 = 2 [s]; thus, this link behaves as under a constant time-gap policy.

To numerically show the robustness properties of the observer, two different disturbances were introduced to Robot 2 in the formation. The first one, at t = 40 s, considers a change of the coordinate x 2 (t) from its actual value to x 2 (t) = −0.2 m, emulating an instantaneous displacement of the robot position. The second disturbance considers a failure in the measurement of x 2 (t), adding 0.1 m to its actual value during the period 80 ≤ t ≤ 81.

Figure 4 shows the time evolution of the set of robots in the X-Y plane, where the effects of the introduced disturbances are clearly visible. It is evident how the convergence time to the delayed leader trajectory depends on the position of the mobile robot in the chain formation; it is also observed how the effects of the disturbance applied to Robot 2 are attenuated for the remaining robots along the chain. As expected from the involved delays, the observation errors e w 1 i (t), e w 2 i (t), e w 3 i (t) displayed in Figure 5 converge, with a convergence time that also depends on the order of the robots in the formation, and the considered disturbances are adequately filtered. The convergence of the tracking errors e s1 i (t), e s2 i (t), e s3 i (t) is shown in Figure 6, where it is evident that they converge once the observation errors have reached the origin, as expected from the preceding developments. The disturbances affecting Robot 2 are appropriately attenuated for Robots 3 and 4. The evolution of the time-varying gap τ i (t) is shown in Figure 7, while the relative distances d i (t), obtained from the actual positions of each pair of successive robots in the formation, are presented in Figure 8. Finally, the control signals ν i (t), ω i (t) for each robot are depicted in Figure 9.
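The following sketch illustrates how the leader's reference velocities can be generated from a parametrized path as in (36). The Gerono-type parametrization x ref = a sin(pt), y ref = b sin(2pt) is an assumed example (the explicit parametrization of the paper is not reproduced above); the constants a = 0.8, b = 0.6 and p = 2π/50 are those quoted in the text, and the conversion to (ν 1 , ω 1 ) uses the standard differential-flatness relations of the unicycle.

```python
import numpy as np

a, b, p = 0.8, 0.6, 2.0 * np.pi / 50.0   # parameters quoted in the text


def reference_velocities(t):
    """Leader velocities (nu, omega) for an assumed Gerono-type lemniscate."""
    # First and second derivatives of the assumed parametrization
    xd = a * p * np.cos(p * t)
    yd = 2.0 * b * p * np.cos(2.0 * p * t)
    xdd = -a * p**2 * np.sin(p * t)
    ydd = -4.0 * b * p**2 * np.sin(2.0 * p * t)
    nu = np.hypot(xd, yd)                             # linear velocity
    omega = (xd * ydd - yd * xdd) / (xd**2 + yd**2)   # angular velocity
    return nu, omega


if __name__ == "__main__":
    for t in (0.0, 12.5, 25.0):
        nu, om = reference_velocities(t)
        print(f"t = {t:5.1f} s   nu = {nu:.3f} m/s   omega = {om:.3f} rad/s")
```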
In the above numerical evaluation, the considered gains were chosen according to the velocity characteristics of the physical mobile robots used on the real-time experiments presented in the next subsection. To show a better convergence of the tracking errors, and the effects of tuning gains, the gains were set as λ 1i = λ 2i = 2 for i = 1, 2, 3 for the observers and k 11 = 8, k 21 = 10, k 31 = 3 and k 1i = 8, k 2i = 10, k 3i = 3 for i = 2, 3 for the control law. For the new set of gains, the trajectories of the robots on the X − Y plane are shown in Figure 10 for the perturbed case, and in Figure 11 for a disturbance-free experiment. Real-Time Experiment (RTE) In order to test the proposed control law, an experimental platform was used to perform two real-time experiments. This platform consists of three differentially driven mobile robots TurtleBot3 type Burger, equipped with a Raspberry Pi Model B and wireless communication, with one virtual robot used as a leader in the formation. The physical robots were used as followers. As mentioned in the control strategy, the virtual robot, under bounded input velocities, provides the trajectory that the follower robots should track. The physical robots have four passive markers (reflective) that are used to obtain their geometrical centroid and in this way compute its position and orientation by means of the localization system. The vision-based indoor localization system consists of 12 Flex-13 cameras with an image resolution of 1280 × 1024, and 120 frames per second (FPS), it was assembled on the roof and produces a vision-working area of 20 m 2 ; these cameras have a LED IR ring and a image sensor, the IR light is emitted and the passive markers reflect this light to the camera image sensor, in this form, the position and orientation was obtained for all the robots by using the software Motive; similar to emulate a GPS localization system. This information was sent to a personal computer where the data are used to obtain the delayed time-varying observers and the control law signals for all the robots; when all the control laws were obtained, the signals were sent by wireless communication through a VRPN (virtual reality peripheral network) and software ROS (robot operating system) that serves as a link between robots and devices. To consider a set of four robots in the chain formation, as in the case of the numerical study (NE) presented in Section 5.1, without lost of generality, it is assumed that the leader robot (i = 1) is a virtual robot that will generate the trajectory that the remaining robots have to follow with specific time delay. In this way, the virtual leader robot is followed by three Turtlebot3 robots forming a string of four vehicles that requires the use of three delayed observers. The distribution of the experimental platform is illustrated in Figure 12. The gains used for the observers (12), and the gains for the control laws (30) are equal to the ones used for the numerical evaluation (NE) presented in Section 5.1. The first real-time experiment corresponds to the Lemniscate-type path, for whose results are compared to the simulated ones presented in Section 5.1; this case would be referred as RTE-a. The second case involves an oval track racing, that includes discontinuous velocity changes of the mobile robots, when entering and leaving a curve path; this case would be referred as RTE-b. 
Lemniscate-Type Path Real-Time Experiment (RTE-a)

For comparison purposes, the first real-time experiment corresponds to the Lemniscate path given by (36), with the same parameters considered in the numerical simulation scenario presented in Section 5.1. The initial conditions of the mobile robots (i = 1, 2, 3, 4) and of the delayed observers (i = 1, 2, 3) are shown in Table 3. The spacing policy is determined by Table 1. Figure 13 displays the evolution of the mobile robots in the X-Y plane, where the convergence of the vehicles to the desired path is obtained under non-null initial condition errors. The convergence of the observation errors e w 1 i (t), e w 2 i (t), e w 3 i (t) is depicted in Figure 14, where, after a transient period, the observer states converge to the delayed trajectory of their respective i-th robot. The evolution of the tracking position errors is shown in Figure 15; it is evident how e s1 i (t), e s2 i (t), e s3 i (t) converge to the origin. The time-varying gap τ i (t) for i = 1, 2, 3 is shown in Figure 16; notice that the magnitude of τ i (t) increases only when the distance between the i-th and the (i + 1)-th robot is less than the desired safe distance r c i , thereby increasing the physical distance d i (t) between the robots to avoid getting too close to each other, see Figure 17, where the time evolution of d i (t) for i = 1, 2, 3 is depicted. In this case, the evolution of the distance d i (t) is measured by the OptiTrack vision positioning system. Note that the time-varying gap activates during transients and at the curved sections of the desired path, where the trailing distance decreases; since the Euclidean distance that triggers the time-varying gap is smaller than the distance along the path, the resulting collision-avoidance action is conservative. The evolution of the input signals ν i (t), ω i (t) for i = 1, 2, 3, 4 is shown in Figure 18, showing that the control actions are bounded and continuous.

Oval Track Racing Real-Time Experiment (RTE-b)

For the second experimental test, we considered that the virtual leader robot follows a reference trajectory that describes an oval racing track. This trajectory was built by combining straight paths, x(t) = gt, y(t) = b r c , and curve segments, x(t) = bh + r c cos(a + pt), y(t) = k + r c sin(a + pt), where r c = 0.5, h = 0.8, k = 0 and p = π/12. Some parameters change according to the segment of the path: a = −π/2, π/2, b = 1, −1, and g = 0.08, 0.13, 0.16 (the two segment families are sketched in code below). Note that the fact that the trajectory is designed sectionwise implies that there will be discontinuity points in the velocities of the leader robot (i = 1), which act as a velocity disturbance for the chain of robots, showing in this way the benefits of the observers. The initial conditions are shown in Table 4; the gains used for the observers are λ 1 i = λ 2 i = 0.5 and λ 3 i = 0.3 for i = 1, 2, 3, with corresponding gains for the control laws (30). The parameters for the spacing policy are again those given by Table 1, where τ s i is adjusted to τ s i = 5 [s]. Figure 19 displays the trajectories performed by each robot in the X − Y plane. The observation errors e w 1i (t), e w 2i (t), e w 3i (t) are depicted in Figure 20, where their convergence is clearly shown, while the tracking position errors e 1 i (t), e 2 i (t), e 3 i (t) are shown in Figure 21. Figure 22 shows the time-varying gaps between robots, while Figure 23 presents the relative distances between the robots.
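The two segment families of the oval track can be evaluated directly with the constants quoted above. The sketch below does only that; how the segments are scheduled in time and joined (i.e., which values of g, b and a apply on each stretch) is not specified above and is therefore left out.

```python
import numpy as np

r_c, h, k, p = 0.5, 0.8, 0.0, np.pi / 12.0   # parameters quoted in the text


def straight_segment(t, g, b):
    """Straight stretch of the oval: x = g*t, y = b*r_c."""
    return g * t, b * r_c


def curve_segment(t, a, b):
    """Semicircular end of the oval: x = b*h + r_c*cos(a + p*t), y = k + r_c*sin(a + p*t)."""
    return b * h + r_c * np.cos(a + p * t), k + r_c * np.sin(a + p * t)


if __name__ == "__main__":
    # Sample each family with parameter values from the quoted sets.
    for t in np.linspace(0.0, 5.0, 3):
        print("straight:", straight_segment(t, g=0.13, b=1))
        print("curve   :", curve_segment(t, a=np.pi / 2, b=1))
```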
Note that when the robots enter the curve, the relative distance between successive robots decreases and the time-varying gap grows, yielding a safe trailing distance. Finally, the control signals are shown in Figure 24. Note that the considered trajectory implies that the angular velocity will be zero when the convoy travels on the straight line, and that the linear velocity decreases during the curve, causing the robots to come closer to each other. Results Discussion At the introduction of the article, the state-of-the-art approaches were established for platoon formation strategies, as well as, for leader-follower setups. While designing the proposed time-varying spacing policy, a comparison to constant spacing policy was carried out, concluding that keeping constant distance and bearing angle between each pair of successive vehicles hinders the follower robots' ability to perform the same path as described by the leader robot, mainly at curves. From these comparisons, it was evident that a varying distance and bearing angle are required at a curve. A similar comparison was performed with respect to time-based spacing policies, concluding that the space headway created by the separation time may render collisions when the translational velocity is small or tends to zero. However, for the sake of space, these comparison studies are not presented. When comparing the simulated (NE) results, Section 5.1, and real-time experiments for the Lemniscate path (RTE-a), Section 5.3.1, we note that convergence of the obser-vation errors is rather faster than the tracking errors, as established from the stability analysis, since the observations errors present exponential convergence, Lemma 1, while the tracking errors converge asymptotically, Lemma 2. In contrast, it can be seen that the time-varying gap and the associated relative distance ensures no collision between the mobile robots, preserving a minimum safe distance. Finally, as to show robustness, when sudden perturbations at positions and orientation are introduced, simulating lost or failure of sensors measurements, or communication channel problems, it can be seen that the observers filtered the peak changes on measurements values, and smoothness of the signals is propagated through the mobile robots chain. Note that by Assumptions 2, 3, and Remark 6, some properties are required for the translation and rotation leader velocities, these constraints are used to prove stability and convergence properties of the observation and tracking errors. However, in real practice scenarios, rotational velocity can be zero, as when moving in a straight line; however, connecting a straight path with a curved one implies discontinuities at the rotational velocity. Thus, in order to show that, even in these scenarios, the proposed controller behaves appropriately and convergence properties are kept, the oval path experiment of Section 5.3.2 was tested. Note that although the mobile robots presents discontinuities, see Figure 24, the stability and convergence properties of the observation and tracking errors is preserved, thus, it can be concluded that the observers help filtering such discontinuities and render smooth tracking trajectories for the platoon formation, and this filter behavior is propagated through the mobile robot chain. Remark 8. It is important to point out some drawbacks of the proposed time-varying gap formation strategy. 
First of all, it should be pointed out that the present chain strategy does not consider a specific obstacle avoidance strategy since at the transient response, when the tracking errors have not converged, there is a possibility of collision among the vehicles at the platoon; this risk is eliminated when the agents have converged to the desired trajectories due to the time-varying strategy. We also note that, even when the individual control of each robot allows the possibility to move backwards, the chain formation does not allow this situation since the leader trajectory has to be followed by all members of the formation. Backward movements could be possible in the case of a time-varying topology, allowing the last robot to become the new leader of the formation. Conclusions In this work, a control scheme for a platoon of mobile robots with a time-varying spacing policy, based on an input-varying-delay observer that estimates the delayed trajectory of the (i)-th robot, which should be considered as a desired path for the (i + 1)-th robot, was developed. The time-varying gap between each pair of successive robots is computed by means of a smooth function τ c i activated on an influence zone that depends on the distance between robots (i)-th and (i + 1)-th. It is formally shown, based on a Lyapunov stability analysis, how the estimation and the tracking errors associated with the chain formation of the vehicles converge to the origin, preserving the formation of the vehicles along any trajectory described by the leader robot in the workspace, due to bounded input velocity signals. The proposed formation strategy shows that when the robots approach each other in a slowdown velocity scenario, the time-varying gap raises its magnitude to increase the distance between each pair of successive robots at the platoon, avoiding collisions among them, rendering an efficient collision-free formation strategy. Real-time experiments are in line with the simulation results, supporting the convergence properties that were obtained by the stability analysis over the estimation and tracking errors. The robustness benefits of the considered observer are shown by numerical and experimental results on a Lemniscate-path type and an oval track race example, showing good performance of the proposed formation strategy, allowing us to conclude some robustness properties of the platooning strategy. It should be highlighted that the present strategy considers the complete kinematic model of the vehicles and not only a reduction model in one dimension or a punctual mass robot, as is usual in the literature. It is also important to mention that all the robots in the formation converge to the same trajectory generated by the platoon leader robot, and the fact that the considered delay observer acts as a natural filter for possible external or measurement disturbances. Finally, it is important to mention that the real-time experiments were carried out in a controlled environment; in a future work, we should consider the same formation problem for an outdoor experiment by adding onboard cameras to the follower robots, which could provide the relative distance and angles between the robots. Further, an obstacle avoidance strategy should be considered in a general solution of this formation problem.
10,031
2021-05-31T00:00:00.000
[ "Mathematics" ]
Thin Wall Ductile Iron Castings: Technological Aspects

The paper discusses the reasons for the current trend of substituting aluminum alloy castings for ductile iron castings. It is shown, however, that ductile iron is superior to aluminum alloys in many applications. In particular, it is demonstrated that it is possible to produce a thin wall wheel rim made of ductile iron without the development of chills, cold laps or misruns. In addition, it is shown that a thin wall wheel rim made of ductile iron can have the same weight as, and better mechanical properties than, its substitute made of aluminum alloys.

Introduction

Marketing investigations [1] indicate that within the last 15 years the share of aluminum alloy parts in American cars has doubled, while in European cars it has increased by 5 to 20%. For this reason, among others, a decline in cast iron production has recently been observed [2], with the exception of ductile iron, whereas the production of aluminum alloy castings increases systematically. Producers of lightweight alloy castings, particularly those based on aluminum alloys, are entering a market which was until now reserved for cast iron. The advantages of replacing cast iron castings with aluminum alloy castings result, among other things, from the lower density of aluminum, which is equal to about 0.38 of the density of cast iron. Such a difference lowers the car weight and, in consequence, the fuel consumption. According to estimates [3], a reduction of the car weight by 100 kg saves 0.5 to 1 liters of fuel per 100 km.

Additional advantages of aluminum alloys are as follows:
 low melting and pouring temperatures, so the thermal load of the molds is relatively low, which enables the use of conventional permanent-mold casting and increases the dimensional accuracy and surface quality of castings (a disadvantage of this method is the difficulty of producing castings without porosity, which lowers the mechanical properties, especially the fatigue resistance),
 the non-magnetic character of aluminum, which facilitates scrap selection and recycling,
 high thermal conductivity, which facilitates better engine cooling,
 the aesthetics of the castings.

The replacement of cast iron with aluminum seems less profitable when the mechanical properties of both groups of alloys are taken into account, e.g. strength, elongation, fatigue resistance, stiffness, etc., as well as the unique properties of cast iron connected with the presence of graphite (e.g. high damping capacity, the lubricating ability of graphite in various tribological pairs). From the analysis of mechanical properties it can be stated that, for a given elongation, the tensile strength R m , yield strength R p0,2 and stiffness (taken as the ratio of yield strength R p0,2 to Young's modulus E) of typical grades of ductile iron are much higher than those of aluminum alloys. When the casting weight is taken into account, strength indicators related to the density γ of the materials (indicator/γ) can be used. It is then apparent that aluminum alloys and ductile iron ensure similar strength indicators (Fig. 1a-c) [1]. An exception is austempered ductile iron (ADI), which has much better properties than the remaining alloys. A valuable advantage of cast iron is its ability to bear long-term cyclic loading, which is usually shown as a stress-number of cycles dependency. From Fig. 1d it follows that above 10 5 -10 7 cycles the admissible stresses for cast iron are higher than for aluminum alloys.
Moreover, below a certain critical stress the number of load cycles that cast iron can withstand tends practically to infinity without the appearance of cracks. Aluminum alloys do not show such a feature. It can be stated that cast iron castings show much higher fatigue resistance and are safer in comparison with aluminum alloys. Taking into account the strength indicators referenced to the material density (indicator/γ), it appears that, e.g., 1 kg of aluminum alloy and 1 kg of ductile iron provide similar values of strength (Fig. 1). The exception is ADI, which considerably exceeds the other alloys. It follows that ductile iron castings can be as light as aluminum alloy castings.

In some cases (e.g. internal combustion engines, catalyst housings) the influence of temperature on the mechanical properties is important. From Fig. 1e it follows that above 100 °C the relative strength of aluminum alloys decreases radically. Above 200 °C the relative strength of ductile iron, especially with a pearlitic matrix, considerably exceeds the relative strength of aluminum alloys. Therefore, for applications at high temperature it is favorable to use cast iron. The next factors which should be considered are wear properties and damping capacity. Cast iron possesses the ability for surface hardening, namely to form a hard and wear-resistant surface layer over a soft and plastic core (e.g. timing gears). The significantly lower damping capacity of aluminum alloys in comparison with cast iron (Fig. 1f) generally leads to higher engine noise.

In general, from the presented analysis it follows that cast iron castings are characterized by the same, and in many essential cases better, mechanical and service properties as aluminum alloy castings of the same weight. One of the essential factors which should also be taken into account when replacing cast iron castings with aluminum alloy castings is the total energy consumption. Generally, it can be stated that the production of cast iron castings is connected with energy savings; moreover, contrary to aluminum and aluminum alloys, cast iron can be remelted many times without deterioration of its quality. The next, and from the economic point of view the most important, argument in favor of cast iron is the much lower production cost of castings compared with aluminum alloys. Fig. 2 shows the individual material costs necessary for obtaining a unit of R p0,2 . It follows that the casting cost of either ductile iron or ADI is much lower than that of aluminum alloys.

Summing up the foregoing considerations, the conclusion is that the decision to substitute aluminum alloy castings for cast iron castings is not always rational and must be preceded by an analysis of many factors, such as: production cost, mechanical properties at ambient and at elevated temperature, fatigue resistance, energy consumption, compatibility with parts made of different materials, noise and corrosion resistance. From the presented analysis it follows that there exists a potential possibility to produce lightweight and cheap cast iron castings. In foundry practice it is usually assumed that, in order to avoid casting defects (e.g. chills, misruns and cold shuts), castings must be cooled down at a relatively low rate, and for this reason the minimal wall thickness of cast iron castings should be above 3 mm. However, taking into account the potential mechanical properties of cast iron, such a wall thickness is often oversized.
It seems that the technology of producing thin wall ductile iron castings (TWDI), with wall thicknesses below 3 mm, should have been worked out at the beginning of the expansion of aluminum alloy castings. Figure 3 shows the predicted cooling rates as a function of wall thickness for castings made with conventional molding sand, together with a comparison with experimentally reported data (prediction after [4]; points — experimental data [5]; lines — experimental data [6]). The problems of thin wall ductile iron casting technology are connected with "entering" a very high cooling rate range, which promotes chills and shortens the metal flow, resulting in misruns and cold shuts. Accordingly, the aim of this work is to produce real thin wall ductile iron castings which can be lighter than their substitutes made of aluminum alloys, but with better mechanical properties, definitely better damping capacity and a lower final cost.

2. Technological aspects

Chills

The method of obtaining thin-walled ductile iron with no chills lies in the fact that the melted iron is prepared in such a way that its chemical composition and the type and amount of inoculant and spheroidizer ensure, in a casting with wall thickness s (expressed in millimeters), a nodule count N (mm⁻²) greater than that obtained from the empirical formula [7]

N > 1185200 / [s² (23.34 + 4.07 C + 18.8 Si)^(4/3)]     (1)

(a short numerical illustration of this relation is given further below). The nodule count is influenced by the physical and chemical state of the liquid cast iron, i.e.: the chemical composition, the spheroidization and inoculation practice, the type of inoculant and spheroidizing agent, their quantity and granulation, the Mg treatment and inoculation temperature, the bath superheat temperature and holding time, as well as the time elapsed after inoculation (fading of inoculation). Examples of the impact of technological factors on the nodule count are shown in Figure 4 [8,9]. In general, technological factors such as a high level of carbon and silicon, high inoculation intensity, a low bath superheat temperature, short holding times and short times after inoculation ensure a high nodule count and promote a decrease of the chill in castings.

Fluidity distance

The high cooling rate (small wall thickness) of thin-walled castings significantly shortens the metal flow in the mold channels and may cause misruns and cold shuts. The fluidity distance (spiral length L) in a mold channel with a wall thickness of 1, 2 and 3 mm is shown in Fig. 5. In the case of castings with constant wall thickness, the fluidity length depends on the pouring temperature and the carbon equivalent (Fig. 6). In thin-walled castings fed with a single inlet there is a high temperature drop along the length of the casting during mould filling (Fig. 7), which can cause structural gradients and misruns [10]. For this reason, it is preferable to use several metal inlets. In summary, the elimination of casting defects such as misruns and chills in thin wall castings is achieved by increasing the pouring temperature and the carbon equivalent and by using multiple supplying inlets.

Thin walled automotive wheel rim made of ductile iron

The aim of this work was to produce a ductile iron casting as a counterpart of an aluminum alloy one. A wheel rim made of an aluminum alloy was selected in order to show that it can be replaced by a lighter casting made of ductile iron without detriment to its properties.
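As a quick numerical illustration of formula (1) — with the caveat that the operator signs inside the bracket had to be assumed when reconstructing the formula above, and that the composition used here (C = 3.6%, Si = 2.5%) is only an assumed, typical near-eutectic value — the snippet below evaluates the critical nodule count for a few wall thicknesses.

```python
def critical_nodule_count(s_mm, c_pct, si_pct):
    """Minimum nodule count (1/mm^2) to avoid chills, per formula (1) as
    reconstructed above (the signs inside the bracket are assumed)."""
    return 1185200.0 / (s_mm**2 * (23.34 + 4.07 * c_pct + 18.8 * si_pct) ** (4.0 / 3.0))


if __name__ == "__main__":
    for s in (1.0, 2.0, 3.0):                      # wall thickness in mm
        n = critical_nodule_count(s, 3.6, 2.5)     # assumed composition, wt.%
        print(f"s = {s:.0f} mm  ->  N_min ~ {n:.0f} 1/mm^2")
```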
Silica sand (grade 1K, fraction 100-200 µm) and urea-furfuryl resin Kaltharz 404U (1.3 wt.%) with a hardener from the group of paratoluenesulfonic acids, 100 T3 (0.5 wt.%), were used, and the foundry mold was made using a paddle mixer (Ms-017A). Melts were produced in an electric induction furnace. The raw materials were Sorelmetal, steel scrap and a commercial Fe-Si alloy. The metal was heated to 1500 °C and then poured into the mold, which was equipped with an in-mold reaction chamber for the spheroidization and inoculation processes. Figure 8 shows the wheel rim made of an aluminum alloy (mass 5.30 kg) as well as its thin wall counterpart cast in ductile iron (mass 5.23 kg). Figure 9 shows a cross section of the ductile iron wheel rim casting and the locations of the metallographic examinations. For the quantitative metallographic studies a Leica MEF-4M microscope coupled with the automatic image analyzer Leica QWin was used.

Analysis of these data indicates that the obtained casting has a ferritic-pearlitic matrix. The nodule count ranges from 1173 to 1353 mm⁻², and the ferrite fraction in the matrix ranges from 72 to 80%. It is worth noting that there are no chills in the structure (Fig. 10). The nodule count calculated from formula (1) is smaller than the measured values, which ensures that the casting is obtained without chills. The research on the ductile iron wheel rim has shown that sound thin-walled castings can be successfully produced without chills and, according to previous studies [4], with good strength properties.

The present paper deals with the technological aspects associated with the production of sound (without chills, cold laps and misruns) thin walled ductile iron castings. In particular:
1. It was shown that the production method of thin wall ductile iron castings should provide a nodule count higher than the number obtained from the empirical formula (1). In addition, the study shows that it is possible to produce thin wall ductile iron castings of considerable length. In thin walled castings fed with a single inlet the temperature drop along the length is high; for this reason, it is preferable to use several supplying inlets.
2. It has been shown that a thin wall wheel rim made of ductile iron can have the same weight, and better mechanical properties, than its substitute made of aluminum alloys.
2,787.4
2013-03-01T00:00:00.000
[ "Materials Science" ]
X-ray diffraction study of structure of CaO—Al 2 O 3 —SiO 2 ternary compounds in molten and crystalline states Anorthite and gehlenite crystalline structure and short-range order of anorthite melt have been studied by X-ray diffraction in the temperature range from room temperature up to ~ 1923 K. The corresponding anorthite and gehlenite phases were identified as well as amorphous component for anorthite samples having identical shape to XRD pattern of the anorthite melt. The structure factor and the radial distribution function of atoms of the anorthite melt were calculated from the X-ray high-temperature experimental data. The partial structural parameters of the short-range order of the melt were reconstructed using Reverse Monte Carlo simulations . Introduction Materials based on the CaO-Al2O3-SiO2 (CAS) and MgO-Al2O3-SiO2 ternary diagrams are crucial for the Ukrainian national economy and other countries due to numerous applications. For example, slags based on these systems are applied in blast furnace production. Many welding fluxes and fluxes for surfacing are also known [1]. Gehlenite, anorthite, mullite, sillimanite and other synthetic compounds of the CAS system have been used at ZAO "Technohim" (Zaporizhia, Ukraine). It should be noted that the number of publications about properties and structure of main minerals of this system in liquid and crystalline states are relatively limited [2][3][4][5][6]. We performed investigation of the melts of three eutectic CAS oxide system samples [6] by high-temperature X-ray diffraction. The study of the ceramic welding flux based on the MgO-Al2O3-SiO2 system with CaF2 additives are presented in [7]. In according to [6] the eutectics exist at the anorthite region boundaries. Our previous high-temperature study [6] allows to conclude that the anorthite is present in all solid samples of the eutectics. The present work is devoted to investigation anorthite and gehlenite compositions which are only known ternary compounds of the CAS system. The study of anorthite and gehlenite both in solid and molten states is necessary to understand the nature of interaction in the CAS system, since the minerals that are formed are very important for physical chemistry, metallurgy and materials science. Experimental procedure At the initial stage, powders of Al2O3, SiO2, and CaCO3 reagents having especially pure grade were mixed in required ratios and were grounded by "Retsch PM 400" ball mill. The ternary compounds were synthesized by heat treatment in platinum-iridium alloy crucibles at 1750°C for 2 h using the Tamman furnace in the flow of high purity argon with further cooling to room temperature (RT). The cores of the samples were separated from the crucibles without touching the crucibles walls using diamond-coated drill. The resulting product was ground again in the same mill and then was further purified in a laboratory scale magnetic separator. Composition, melting points, experimental temperature and phase composition before melting according to XRD analysis are shown in Table 1. The powdered sample was placed in special molybdenum (Mo) crucible with the carefully smoothed out and polished inner surface to reduce interaction with investigated melts. The XRD study was performed by high-temperature X-ray θ-θ diffractometer using monochromatic MoKα radiation in a vacuum chamber filled with high-purity helium X-ray diffractometer. The design and experimental procedure of the diffractometer were described in [2,6,9,10]. 
The XRD pattern of anorthite at room temperature is shown in Fig. 1. The amorphous backgrounds of the XRD patterns of the anorthite at all investigated temperatures are shown in Fig. 2. The crystalline part of the XRD patterns of anorthite at different temperatures in comparison with the XRD of the anorthite melts are given in Fig 3. After amorphous background subtraction the all XRD patterns were normalized to maximum intensity of 1000 n.u. temperatures. There are two curves at RT: before (1) and after remelting (2). a) b) Figure 3. XRD patterns of crystalline anorthite at different temperatures (a) and anorthite melts (above); XRD patterns of anorthite at room temperature (b) before (1) and after remelting (2) The XRD patterns of gehlenite at various temperatures are shown in Fig. 4, 5. The subtraction of the amorphous background and normalization of the XRD patterns of gehlenite were performed in the same way like in the anorthite case. The following software was applied to analyze crystalline diffraction patterns: PCW, Match, X'Pert HighScore Plus, Diamond 3.2. (1) and after remelting (2). The phase diagram of CAS system is shown in Fig. 6. The red circles indicate the location of the ternary compounds that are studied in this work. The gray circles show the compositions (1,2,3) investigated in [6]. It should be noted that the anorthite position on CAS phase diagram is in the immediate vicinity of the mullite field, and gehlenite is much further. Therefore, the composition of the sample studied in [4] (green circle) even more enriched with calcium oxide and having structure in the molten state that may differ significantly. The XRD-pattern of molten anorthite at 1923 K was obtained by high-temperature θ-θ diffractometer (MoKα radiation). It should be noted that the anorthite melting point is in accordance with [8]. However, the XRD pattern of anorthite sample at 1820 K contains unstable crystalline peaks which does not match with mullite phase. The completely molten sample was obtained at 1873 K only. Nevertheless, the experimental temperature was increased up to 1923 K to avoid mentioned unstable effects observed near the melting point. The structure factor (SF) and radial distribution function (RDF) curves were calculated by self-developed software [6,7]. The structure models were reconstructed from experimental data (experimental SF) using Reverse Monte Carlo (RMC) simulations [11,12,13]. There are no experimental densities of the CAS melts therefore required melt density values at the investigated temperature was evaluated using the approach proposed in [14]. This method is based on the analysis of the RDF region before the first (main) peak. It allows estimating adequate density values of the investigated melts. Crystalline samples The XRD pattern of anorthite samples at RT and annealed at different temperatures contain reflections identified as crystalline anorthite phase as well as the rather strong background of amorphous component. An example of such XRD pattern at RT is shown in Fig.1. The amorphous contributions at all experimental temperatures are presented in Fig.2. These patterns have a certain similarity with each other and resemble a scattering curve from the melt. Therefore, it might lead to the preliminary conclusion that the crystalline and amorphous components have identical composition in samples. The reflexes of different XRD patterns of crystalline anorthite sample (up to 1773 K) were identified as the anorthite phase. 
The good agreement with peaks positions with minor difference in the peaks relative intensities were observed. There were no changes up to melting point with exception of an enlargement of the anorthite phase lattice parameters with temperature rise and the characteristic intensive amorphous background of anorthite XRD patterns. During solidification of remelted sample in the furnace cooling mode there is no enough time to form crystalline phases therefore an amorphous phase is also formed (Fig.1). It should be noted that both crystalline and noncrystalline scattering components have no changes even during long isothermal treatment. The anorthite sample XRD pattern was changed significantly (Fig. 3b) after melting and heat treatment for two hours at 1923 K and following fast cooling in accordance with the furnace cooling mode. It may be noted that after remelting the XRD of the solid sample has weak peaks of the anorthite phase and more intense peaks of the unidentified phase (phases) that was absent before melting. The non-crystalline component of scattering is very high and reminds practically the same after melting (Fig. 2). The amorphous background is practically absent on the XRD patterns of gehlenite ( Fig. 4) (unlike the XRD patterns of anorthite). The temperatures of this study are indicated in Table. 1 and Fig. 4. The melting point of gehlenite is 1866 K [8]. Unfortunately, our attempts to register a typical liquid curve were failed up to temperatures about 1700°C (the limiting operating of high temperature diffractometer). The XRD patterns of the gehlenite samples were interpreted as gehlenite phase below the melting point (up to 1773 K). The background amorphous component is practically absent before and after melting on the diffractograms. Like anorthite, the crystalline component undergoes significant changes after remelting. To interpret successfully obtained data we should assume that the gehlenite (and probably anorthite) partial decomposition into simple oxides and more complex CaSiO3 type double oxides have place. The absence of amorphous background on XRD pattern of the gehlenite can be explained by difference of their melt viscosity (2.5 Pa×s for anorthite and 0.27 Pa×s for gehlenite melts) [17]. In case of the anorthite the crystallization is complicated by higher viscosity of corresponding melt therefore the amorphous phase is formed. RDF analysis of the anorthite melt The first maximum on the experimental RDF has position at 0.167 nm. According to our data, it can be the superposition of Si-O (0.164 nm) and Al-O (0.169 nm) coordination contributions. Silicon and aluminum coordination numbers by oxygen were calculated by the formula: (1) describes the case of central cationic polyhedron surrounded by polyhedrons of the same type. So far as the atomic fractions and scattering factors (KMe) are slightly different they give very similar results. The oxygen polyhedrons with silicon and aluminum atoms inside (mostly tetrahedra) form joint nanogroupings. These alumosilicate nanogrouping (ASNG) based on close packed structure of oxygen atoms with Al 3+ occupying both octahedral and tetrahedral sites, Si 4+ occupying tetrahedral ones that forms in the molten and amorphous states. Calcium atoms are probably not a part of such groupings. The coordination contribution of Ca-O is observed at 0.223 nm (the position of the peak R1(Ca-O) = 0.223 nm) and the calcium coordination number by oxygen is quite large (about 9.5). 
This value of coordination number is undoubtedly too large to support joint Si-Al-Ca-O nanogroupings since the octahedral cavities formed in close packed ASNG will be small to accommodate Ca 2+ cations. According to obtained data the melt consists of Si-Al-O nanogroupings with calcium captions concentrated on the surface of the ASNG and followed by the outer layer of oxygen atoms. RMC simulations of the anorthite melt The simulated SF is in good agreement with experimental one (Fig7). It may be noted that unlike to most oxide melts the first maximum on SF is lower than the second one. The structural parameters obtained by both RDF and RMC methods correlate well with each other. For example, according to RDF analysis the R1(Si-O) is 0.164 nm and R1(Al-O) is 0.169 nm. The closest interatomic R1(Si-O) and R1(Al-O) distances obtained from partial pair correlations functions (RMC data) are equal to 0.163 and 0.171 nm, respectively (Fig. 9, a1). The corresponding CNs for both cases are close to 4 (Fig. 9, a2). Therefore, four oxygens are coordinated predominantly around the silicon or aluminum atom. The distance of R1(Ca-O) is equal to 0.219 nm (Fig.9, a1), CN = 8.7 (Fig.9, a2). This value is slightly lower than the one calculated from the RDF (9.5 atoms) but it is still high. Fig. 9, a2 shows also the partial pair correlation functions (gij(r)) for Al-Al and Si-Si bonds that are very similar to each other although expressed not clearly. In both cases the position of the first peak is close to 0.314 nm (R1(Al-Al) and R1(Si-Si) are 0.314 nm). Partial gAlSi(R) resembles gAlAl(R) and gSiSi(R), however, the R1(Al-Si) is somewhat larger (~ 0.318 nm). The similarity of gAlSi(R), gAlAl(R) and gSiSi(R) is probable due to occupation of Al 3+ , Si 4+ ions the same positions in the ASNG. According to the results of [6] the eutectic melts are characterized by the CN of aluminum by oxygen is between 4.3 and 5.3 [6], confirming 5-6 oxygen atoms surrounding aluminum. In the present work, the RDF calculation estimates ZAl(O) as ~ 4.05, and RMC provides ~ 3.9. Therefore, both methods indicate that almost all aluminum cations (like silicon) are in the oxygen tetrahedron centre. Aluminum (or silicon) atoms are surrounded by ≈2 atoms of the same kind and ≈2 atoms of the different kind (Si or Al, consequently). We assume that the low value of CN is caused by bonds of non-bridge oxygen with the Ca 2+ on the surface of the ASNG as a part of oxygen surroundings of silicon and aluminum cations. If aluminum and silicon cations were in oxygen tetrahedral microregions (which may occur in some crystals), ZAl (Al) and ZSi (Si) CNs would be close to 4, and ZСa (Al) and ZСa (Si ) would be about zero (~ 1 according to RMC method). Figure 9, c1 shows that aluminum cation surrounding includes approximately one Ca 2+ , and less than one Ca 2+ is coordinated around silicon (not shown in the figure). Obtained results support suggestion about the presence of the ASNG in the melt. The goo (R) has main maximum at 0.275 nm, which is characteristic of R1(O-O) distance in the grid of silicon-oxygen tetrahedrons, but it is somewhat shorter than in the aluminiumoxygen tetrahedron. Obviously, a slightly longer distance is also characteristic for AlO4 tetrahedron. In our opinion, a small influx goo(R) in the region of 0.32 nm is typical for R1(O-O) in calcium polyhedrons (not shown in the figures). The ZO(O) coordination distribution is broad (Fig. 9, c2) and demonstrates a significant contribution of large coordinations. 
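The coordination numbers discussed above are, in practice, obtained from the area under the corresponding peak of the radial distribution function. A generic way to perform this integration numerically is sketched below; the Gaussian model peak and the assumed oxygen number density are placeholders for illustration, not the experimental data of this work.

```python
import numpy as np


def coordination_number(r, g, rho, r_min, r_max):
    """Integrate 4*pi*rho*r^2*g(r) over the first peak [r_min, r_max]."""
    mask = (r >= r_min) & (r <= r_max)
    integrand = 4.0 * np.pi * rho * r[mask] ** 2 * g[mask]
    return np.trapz(integrand, r[mask])


if __name__ == "__main__":
    # Placeholder partial pair-correlation function: a Gaussian peak at 0.164 nm (Si-O).
    r = np.linspace(0.10, 0.30, 2000)                    # nm
    g = 20.0 * np.exp(-((r - 0.164) / 0.008) ** 2)       # model peak, not measured data
    rho = 44.0                                           # assumed oxygen number density, atoms/nm^3
    print("CN(Si-O) ~", round(coordination_number(r, g, rho, 0.14, 0.19), 2))
```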
The small ZO (O) contribution could be from the ASNG, while the large one forms the nearest oxygen surroundings of Ca 2+ . The obtained ZCa(O) values (between 8 and 12) is overestimated in contrary to ZCa(O) values close to 6 in slag melts [6][7]. The model of multicomponent silicate melt structure based on close packed shell of oxygen atoms, with all cations of the melt occupying existing tetrahedral and octahedral cavities, was also proposed in [9]. Smaller cations are located closer to the center of this grouping, whereas larger cations are closer to the periphery. However, such a model is unsuitable for the investigated melt and the eutectic melts [6]. In this study, the melt structure model of the same system is proposed in close to the mullite existence region. We have assumed that thermally stable ASNG are formed in the melt. However, these nanogroups are different than mullite-sillimanite type nanogroupings. As shown in [6], the negative surface charge of oxygen layer on the boundary of the nanogrouping is neutralized by Ca 2+ cations. These cations form a positively charged surface layer around the ASNG, which in turn is surrounded by oxygen atoms in the melt. The mullite-sillimanite type ASNG (close spherical shape) are based on the close packed oxygen atoms, whose tetrahedral cavities (according to our data) can be occupied with silicon or aluminum atoms, with some octahedral cavities filled by aluminum atoms. In anorthite type melts, the low CN of aluminum cations (~ 4) and other data indicate that aluminum cations are predominantly tetrahedrally surrounded by oxygen. By the way, the sample 1 in [6] (see Fig.6), whose composition is the most distant from the phase mullite region, is characterized by the smallest CN ZAl(O) ~ 4.1-4.5. In melt 2 (see Fig.6) ZAl(O) is 4.4-5.3, and ZAl(O) is ~ 4.9-5.1). In our opinion, six oxygen-coordinated Al 3+ , in the melt of anorthite, can only be in a disordered quasigas matrix, in which the ASNG are also located. This matrix is highly disordered and makes an insignificant contribution to scattering is highly. Therefore, groupings based on AlO6 3are not observed. The expansion of nanogrouping size with temperature increasing can lead to diffusion of aluminum atoms into the expanding octahedral vacant positions. Calcium cations with significantly larger size compare aluminum and silicon will be forced to occupy positions outside of the ASNG. Discussion Crystalline anorthite can be attributed to framework aluminosilicates (Fig. 10, a1). Alternating layers of pure aluminosilicate tetrahedra and layers that have cavities saturated with calcium cations exists in the crystalline structure anorthite. Figure 10, b1 shows fragment of the the anorthite unit cell in the form of coordination polyhedrons. As can be seen in Fig. 10, b1, all Si (yellow) and Al (green) polyhedrons are tetrahedral, and calcium polyhedrons consist of irregular polyhedrons of complex shape with CN of about 8-9. Tetrahedrons are linked by vertices, but calcium cation polyhedrons have no bonds with one another. The part of SiO4 and AlO4 tetrahedrons are connected with Ca 2+ polyhedrons even by faces. Mullite also belongs to framework aluminosilicates, forming infinite 3D-grid of aluminum-silicon-oxygen tetrahedrons in crystalline state. However, it contains aluminumoxygen octahedron groupings, with aluminum atoms playing the role of cations. The fragment of mullite structure is shown in Fig. 10, c2 (all atoms outside polyhedrons were removed). 
The SiO4 and AlO4 tetrahedrons and AlO6 octahedrons are highlighted here in orange and green, respectively. Crystalline mullite reflections (hardly distinguishable from the liquid melt curve) were observed for all melts investigated in [6]. Therefore, it can be suggested that mullite type ASNG are formed in samples near the melting point present work. However, this suggestion was not confirmed by subsequent analysis. The predominant tetrahedral oxygen cations of aluminum and the presence of calcium cations around the ASNG significantly distinguish the structure of the anorthite melt from the mullite one. Calcium cations were present in all eutectic compositions in [6], however, the ASNG in the anorthite and mullite melts significant differences. In anorthite and, possibly, in melt 1 from ref. 6[6] (Fig.6), the silicon and aluminum cations occupy mainly positions inside the tetrahedron of oxygen atoms, and in melts close to mullite region, aluminum cations partially occupy octahedral positions. Assuming that the substance will retain some similarity with its high-temperature crystal structure after melting we should expect that remaining of the some specific features of the crystal in the melt. Obtained data allow suggesting that the ASNG have the anorthite type in case of investigated melt and the melt 1 from ref. [6]. The mullite type of nanogrouping has place for sample 2 and 3 [6] (Fig.6). In contrary to [6], the crystalline mullite peaks were not observed in the investigated melts in anorthite type melt before and after melting, although probably some peaks (at least two in Fig. 3b) can be interpreted as mullite after remelting. Like [6] this paper supports suggestion about the presence of the ASNG, with their negative surface charge neutralised by calcium cations. Nevertheless, the ASNG of the investigated melts have some differences from the ASNG of the melts investigated in [6]. The ASGN in the anorthite melt have practically empty octahedral cavities. On the other hand, a significant part of the octahedral cavities contains aluminum atoms in the melts close to the mullite phase region. According to [8] the complex aluminosilicate particle existence transferred to an anode and Ca 2+ ions migrating to a cathode in the anorthite melt. Johnson et al [15] indicated that there are two types of oxygen ions in supercooled silicate melts (bridging and non-bridging). Bridging oxygens form bonds with two grid-forming cations, whereas non-bridging oxygens are associated with only one grid-forming cation. In accordance with [6], the contributions of noncrystalline and crystalline components should be established from the diffraction data obtained for anorthite before melting in the investigated temperature range. The non-crystalline component is represented by strongly disordered ASNG resembling the melt structure. The crystalline part consists of anorthite polycrystals. In general, after melt solidification crystals of the same type as before melting are usually obtained. The ASNGs formed during melting mainly retain other microgroupings in the nearest surroundings. They rapidly restore bond distances with one another and build a long-range order during solidification. However, in high viscosity melts, some particles that pass significant diffusion distances have no time to occupy their crystal lattice positions during fast cooling. They can interact with one another forming more simple compounds. 
Therefore, anorthite and gehlenite can decompose, and as a result their decomposition products are observed. Using high-temperature microscopy, Welch et al. [16] observed the appearance of corundum crystals and pseudowollastonite CaSiO3 upon cooling the anorthite melt from 1873 to 1673 K, although the remaining part of the cooled sample consisted of various modifications of anorthite. Berezhnoi [8] suggested partial decomposition of anorthite during melting, but explained the appearance of corundum by nonequilibrium (fast) crystallization of the melt. In our opinion, the crystalline part of anorthite is completely transformed into the melt structure upon melting. The non-crystalline part is also reconstructed, but only insignificantly, since its short-range order in many respects already resembles that of a liquid. At least at the initial stage, a homogeneous liquid structure of the anorthite melt is formed. Crystalline peaks are still observed in the anorthite melt, but they are unstable and are not close to mullite, unlike those reported in [6]. These peaks disappear after overheating and long exposures at high temperatures. Unfortunately, they could not be interpreted because of their instability during the XRD experiment. In accordance with the proposed model, the ASNG in the anorthite-type melt are formed inside a disordered (quasi-gas) matrix. The molten matrix consists of atoms and small atomic clusters that interact weakly and are in diffusion equilibrium with each other. The forces of interaction between the molten matrix and the ASNG are significantly lower than the forces inside the ASNG; they are close to Van der Waals forces in nature and depend strongly on temperature. In contrast to the bulk material, the fraction of atoms on the nanogrouping surface is comparable with the fraction inside it. The presence of non-bridging surface oxygen atoms leads to a negative charge of the microgroup. Only the calcium cations migrating freely in the molten matrix can compensate this negative charge. Calcium cations near the ASNG surface completely, or at least partially, compensate its negative surface charge, and an oxygen layer of the melt matrix behind the calcium cations compensates the positive charge of the Ca 2+ surface layer. Therefore, the oxygen surroundings of Ca 2+ consist partly of the non-bridging oxygens of the ASNG and partly of the molten matrix, which provides the large coordination number of calcium by oxygen. Some anorthite crystal images, taken from a well-known collection of CIF files and processed using Diamond 3-2, are shown in Fig. 10.

Figure 10. Crystalline structure of anorthite: a1) silicon (yellow), aluminium (green) and calcium (red) atoms in the anorthite lattice; b1, c1, d1, a2) polyhedra junctions in the anorthite unit cell. Crystalline structure of mullite: c2) silicon-oxygen and aluminium-oxygen tetrahedra surrounded by Ca 2+ (gray); d2) aluminium-oxygen tetrahedral network of one of the anorthite structures (calcium octahedra are not shown).

As mentioned above, the short-range order in the melts retains some features of the short-range order of crystalline anorthite or mullite. During melting, nanogroupings close to regular polyhedra of the cluster type (for example, the Mackay cluster) are formed. The regular polyhedron of cations and anions is formed under the action of powerful surface and electrostatic forces, which are responsible for the spherical symmetry of nanoclusters at small atomic-grouping sizes.
The cluster has a close-packed oxygen structure. Its octahedral and tetrahedral cavities are occupied by Si 4+ and Al 3+ cations, which have practically identical ionic radii; slightly larger cations (for example, Al 3+) can also fill octahedral cavities. A simplified model of the structure of the anorthite or mullite melt is shown in Fig. 11. The figure shows only the oxygen (yellow) and calcium (red) atoms. The ASNG is built on the principle of the Mackay cluster by the Al 3+ and Si 4+ cations: in the case of anorthite only tetrahedral cavities are occupied, whereas in the case of mullite part of the octahedral cavities is occupied as well. Clusters of the smallest possible size can be present in the disordered (quasi-gas) matrix. The Si 4+ and Al 3+ cations are not shown, because they would make the pattern difficult to perceive.

5. Conclusions

Anorthite and gehlenite show no phase transitions in the temperature range between room temperature and the melting point; only a slight enlargement of the lattice parameters with rising temperature was detected. The features of the aluminum-silicon-oxygen grid structure observed in anorthite crystals differ from the crystalline structure of the eutectic samples studied in [6], which causes the difference in the aluminum-silicon-oxygen nanogroupings of the corresponding melts. Silicon and aluminum are predominantly tetrahedrally coordinated by oxygen in the investigated melt. The interatomic Si-O and Al-O distances in the melts are consistent with those in crystals and in melts of other oxide systems. Oxygen atoms form the nearest surroundings of Ca 2+ at a distance of 0.223 nm with a coordination number in the range of 8 to 10. It is evident that some distinctive structural elements of crystalline anorthite are retained in the anorthite melt. Negatively charged ASNG are formed in the investigated melts; this negative charge is compensated by Ca 2+ cations that saturate the disordered (quasi-gas) matrix. The matrix consists of ions and small atomic clusters. The randomly distributed aluminum-silicon-oxygen nanogroups in the matrix resemble nanocrystals.
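Coordination numbers such as the ZCa(O) of 8-10 quoted above are typically obtained by integrating the partial radial distribution function over the first coordination peak, Z = 4πρ ∫ r² g(r) dr. A minimal sketch of that integration, with a hypothetical g(r) peak and an assumed oxygen number density (both placeholders, not the measured data of this study):

```python
import numpy as np

# Hypothetical Ca-O partial pair-distribution function; real values would come
# from the X-ray structure-factor analysis described in the paper.
r = np.linspace(0.15, 0.40, 400)                         # interatomic distance, nm
g_CaO = 1.0 + 6.0 * np.exp(-((r - 0.223) / 0.02) ** 2)   # toy first peak at 0.223 nm

rho_O = 45.0            # assumed oxygen number density, atoms / nm^3 (placeholder)
r_min, r_max = 0.18, 0.30                                # limits of the first peak

mask = (r >= r_min) & (r <= r_max)
# Coordination number: Z = 4 * pi * rho * integral( r^2 * g(r) dr ) over the peak
Z_CaO = 4.0 * np.pi * rho_O * np.trapz(r[mask] ** 2 * g_CaO[mask], r[mask])
print(f"estimated Z_Ca(O) ~ {Z_CaO:.1f}")
```

With these toy inputs the integral evaluates to roughly 10, i.e. the same order as the coordination numbers discussed above; the result is, of course, sensitive to the assumed density and integration limits.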
5,769.4
2020-01-01T00:00:00.000
[ "Materials Science" ]
Database Programming Languages (DBPL-5) Query facilities in object-oriented databases lag behind their relational counterparts in performance. This paper identifies important sources of that performance difference, the random I/O problem and the re-reading problem. We propose three techniques for improving the execution of object-oriented database queries: reuse/out of order execution, memoization, and buffer replacement policy. Schedule level optimization is introduced as our framework for integrating these techniques into query processing systems. Introduction Performance is an important consideration for the query component of object-oriented database systems.The success of the relational data model is due largely to the ability to provide suitable performance for query processing.The same may be true for the success of object-oriented databases.The nature of object-oriented databases leads to different performance characteristics than relational databases -assumptions and techniques from relational query optimization do not always apply.The work in this paper addresses one of these cases.We begin by presenting a characterization of a performance bottleneck that is not present in relational database systems, followed by an analysis of the failings of current optimization techniques.In section 4, we analyze this bottleneck and derive a suite of optimization techniques.Section 5 presents schedule level optimization, our framework for implementing the techniques from section 4. Perils of Object-Oriented Queries Object-oriented databases allow objects to reference other objects.In particular, this means that objects in one set can reference objects in another set.This leads to a situation where two objects may share a third object.Pages that contain shared objects are accessed many times during the course of one iteration through the referencing set. 
Select(Emps; e : e.dept.size >= 25)

Figure 1: A typical object-oriented query

Query 1 traverses the elements of the source set Emps using the path-expression e.dept.size to follow references to objects in the target set Departments, and eventually to the size attributes of those Departments. The source set is traversed sequentially, and each object or page of objects is accessed only once. We cannot guarantee that the objects or pages in the target set will be accessed sequentially. We also cannot guarantee that a shared object or page will be resident in the buffer pool when it is needed, leading to the possibility that such pages will be read from disk multiple times. These multiple accesses result because two or more objects in the source set share a single object in the target set. The ability to capture this kind of object sharing is a principal feature of object-oriented data models. For the sample query, it is likely that more than one Employee works in each Department, which means that multiple Employee objects will reference each Department object. Object sharing leads to two performance problems. First, it causes a great deal of non-sequential I/O, since the target set's traversal order is determined by the source set's traversal order, and the two orders are unlikely to be similar. We'll refer to this as the random I/O problem throughout the rest of the paper. Second, the combination of object sharing and a finite buffer pool can cause the same object or page of objects to be retrieved from disk multiple times, since multiple objects in the source set refer to individual objects in the target set. We'll refer to this as the re-reading problem throughout the rest of the paper, and the "extra" reads of the same object will be called re-reads. These two problems are related, but much more effort has been focused on the random I/O problem than on the re-reading problem. Solving the re-reading problem also (partially) solves the random I/O problem, but solving the random I/O problem does not necessarily solve the re-reading problem. We explore this in more detail in section 3. The focus of our work is on eliminating re-reads, which takes the re-reading problem point of view. As a side effect, we will also reduce the number of random I/Os, thus addressing the random I/O problem.
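To make the re-reading problem concrete, the following toy simulation (not from the paper; collection sizes, fan-out and buffer size are arbitrary assumptions) counts how often Department pages are fetched when they are visited in Employee order through a small LRU buffer:

```python
from collections import OrderedDict
import random

# Toy model of the re-reading problem: Employees reference shared Departments,
# and Department pages are visited in Employee order through a small LRU buffer.
random.seed(0)
NUM_EMPLOYEES, NUM_DEPTS, DEPTS_PER_PAGE, BUFFER_PAGES = 10_000, 500, 10, 8

emp_dept = [random.randrange(NUM_DEPTS) for _ in range(NUM_EMPLOYEES)]

buffer = OrderedDict()          # page id -> None, ordered from LRU to MRU
reads = 0
for dept in emp_dept:           # traversal order is dictated by the source set
    page = dept // DEPTS_PER_PAGE
    if page in buffer:
        buffer.move_to_end(page)          # hit: no I/O
    else:
        reads += 1                        # miss: the page is (re-)read from disk
        buffer[page] = None
        if len(buffer) > BUFFER_PAGES:
            buffer.popitem(last=False)    # evict the least recently used page

print(f"{NUM_DEPTS // DEPTS_PER_PAGE} Department pages, {reads} page reads")
# With only 8 buffer pages for 50 Department pages, most probes miss, so the
# same pages are re-read thousands of times although 50 reads would suffice.
```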
Clustering [6] and prefetching [27,14] are also common performance enhancements in object-oriented databases.Clustering addresses the random I/O problem because it retrieves related objects in a single I/O operation, again eliminating I/O operations which happen to be random I/O's.Clustering is not guaranteed to improve the performance of other parts of the query that happen to use the objects retrieved by the cluster.Clustering is in the same position as indexing when it comes to addressing the re-reading problem. Prefetching attempts to reduce I/O's (and therefore random I/O's) by ensuring that required objects are in the buffer pool when the query needs them.A system that prefetches may be able to prefetch objects that are being read again, but that doesn't really address the re-reading problem unless the object being read again can be retained in the buffer pool.Recent work [17] has noted that the effectiveness of prefetching decreases as the page size increases, and more importantly, as the degree of sharing (number of references to shared objects) increases. Object assembly [20] controls the order in which unresolved object references are resolved.In general, the strategies employed by object assembly reduce the number of random I/O's performed by the query.Object assembly is unable to improve the performance of other parts of the query which reference objects that have already been assembled.The assembly operator does not explicitly attempt to address the re-reading problem.It does have limited effect on re-reads because it alters the order of object references which in turn determines the order in which pages are flushed from the buffer pool, but this improvement is due more to fortuitous referencing patterns than explicit plans to reduce re-reads. One method of solving the random I/O problemis to sort the objects by their physical addresses.This is very effective at reducing random I/O's.It is not an effective solution to the re-reading problembecause the objects may be paged out of the buffer pool before they are re-read.In that case, the objects will have to be read into the buffer pool again. The usual mechanism for addressing the re-reading problem is to make use of the database buffer pool [15], and hope that objects which are re-read remain in the buffer pool between the reads.Sacco and Schkolnick's work on hot sets [29,30] computes the number of pages required by a particular query as an aid to access-path selection.They also note that different kinds of access patterns benefit from different buffer management policies.Subsequent work in this Fifth International Workshop on Database Programming Languages, Gubbio, Italy, 1995 area has focused on further analysis of the relationship between access patterns and page replacement and on making the best use of the buffers available at query execution time [9,25]. None of these methods directly address the re-reading problem, other than relying on the buffer manager to do a good job.They make no attempt to explicitly minimize the number of re-reads.Instead, responsibility for reducing re-reads is left to the buffer manager.The buffer manager is only able to do this as well as its replacement policy allows.One way to improve on the performance of queries with re-reads is to "extend" the size of the buffer pool by improving the page replacement policy.Chan et.al. 
[5] use hints to the buffer manager to improve replacement selection.These hints are encoded via user definable priorities.They do not describe any schemes that address the re-reading problem.The LRU-K [26] algorithm remembers the timestamps of the last k references to a page, in an effort to distinguish between frequently and infrequently referenced pages, which does better than LRU, but is still not the best for situations with lots of sharing, since the last k references to an object are not a good indicator of how many more references to that object will occur. Cornell and Yu [13] described a method for integrating buffer management with the query optimizer.Their method focuses on determining which relations should be kept in the buffer pool, and using that information to prune the set of access plans under consideration.This doesn't address any of the issues related to the re-reading problem, and in particular no reduction in re-reads occurs. Chen and Roussopoulos [7] cache the results of queries.If the result of a query has been cached, then this technique addresses both the random I/O problem and the re-reading problem.Query result caching does not help the first time that the query is executed, nor does it help if the cache has been flushed.Kemper and Kossman [21] propose a dual buffering scheme, where the buffer pool is divided into two segments, one dedicated to buffering pages, and another dedicated to buffering objects.Dual buffering allows useful objects on a page to be buffered "individually" if the rest of the page that they occupy is not useful.This eliminates wasteful use of memory in the buffer pool caused by internal fragementation of buffer pages, and generally improves query performance. Kinds of Performance Improvements The re-reading problem has two major sources.The first cause of the re-reading problem is that multiple operators in the same query can refer to the same objects.An example of this situation is the case where some path expression is used in the predicate of more than one operator in a query.Depending on the execution order determined by the optimizer, the objects referenced by the path expression will be accessed twice, once for each operator.The query in Figure 2 is an example of this kind of query.The set Departments is accessed both as one of the inputs to the join, and as one of the components in the Select's path expression.This case is often addressed by common sub-expression elimination techniques [11,28,16], but there may be additional opportunities for performance improvements when a subset of the objects described by the processing of the common subexpression are used in another part of a query.Common subexpression elimination is a source level analysis technique and has no notion of whether the objects that are produced by the common subexpression (or its intermediate values) will be needed by parts of the query which do not contain the source level common subexpression. The second cause is that within a single operator over a set type, multiple objects in that set use some attribute to reference objects in another set.Multiple objects from the source set (the parameter to the query operator) may reference the same objects in the target set (specified by a prefix of the path expression).Query 1 is an example of this kind of query. 
All of our optimizations share the notion of common work elimination, that is, we seek to eliminate all unnecessary read operations, even those that are undetectable by source level common subexpression elimination.We propose three classes of methods for providing performance improvements for object-oriented queries: reuse/out of order execution, memoization, and buffer replacement policy.Reuse and buffer replacement policy attempt to increase the effectiveness of the buffer pool, thereby eliminating re-reads and I/O operations.Memoization also has common work elimination as its goal, and is used in situations where reuse/out of order execution is not permissible. Fifth International Workshop on Database Programming Languages, Gubbio, Italy, 1995 Reuse / Out of Order Execution In an ideal world, each object referenced by a query would be read into memory once, regardless of how it was referenced.After that, the object would be retained in the buffer, and any subsequent references would not cause additional I/O.This could only happen if the buffer pool is infinitely large.This idealized situation provides a valuable intuition for a new kind of optimization.Our intuition is that the first time an object O is read into the buffer pool, we want all operators that will perform a computation using O to perform their computations before O leaves the buffer pool.These operators are reusing the work done by the operator that actually caused O to be retrieved from disk.If we could find a way to allow these other operators to execute the slice of their execution related to O, then we can ensure that O will not be read from disk again in the future. One method of realizing this intuition is to allow the query to execute out of order: During the evaluation of the query, we allow the flow of control to leave the execution of one operator and enter the execution of another operator.This happens to a limited extent in pipelined execution models [18], and what we propose is a generalization of pipelining.In a pipelined implementation, plan operators are implemented as coroutines, with control passing from coroutine to coroutine in a linear sequence corresponding to the ordering of the physical plan.For example, in Join(Select(A,f1),B,f2), each time a tuple of A is processed, control begins with the coroutine for the Select, and is transferred to the coroutine for the Join.This is a restricted form of out of order execution.Each object or relational tuple starts at the coroutine for the innermost plan operator, and passes through all the coroutines for the plan operators enclosing that inner most operator before the next object or tuple is processed.The order of execution is "out of order" compared to an implementation where each plan operator is implemented as a procedure operating on entire sets.The order of execution is still in an order that is specified by the query, however the iteration takes place at a smaller granularity. 
Our notion of out of order execution is a generalization of this idea. Pipelined implementations restrict the transfer of control to be between an operator whose output is connected to the input(s) of another operator. We generalize this by removing this restriction, allowing transfer of control between plan operators whose outputs and inputs are not directly connected. As long as the output type of one operator matches the input type of another operator, transfer of control may occur, subject to constraints regarding set overlapping and coverage. As an example, consider the query in figure 2:

Join(Dept; Select(Emps; e : e.dept.size >= 25); (d, e) : d.mgr.sal … e.debt)

Figure 2: A query amenable to reuse

We assume that the Join is evaluated via a nested loops algorithm, and the selection via sequential scan. We assume the file containing Departments is structured so that the Manager of a Department is clustered together with the Department. Furthermore, we assume that at least one Employee works in every Department. A typical physical plan for this query appears in figure 3:

LoopJoin(DeptMgrCluster; LoopSelect(Emps; e : e.dept.size >= 25); (d, e) : d.mgr.sal … e.debt)

In this example, the collection Department is traversed twice, once by the LoopJoin (since it appears as one of the join inputs), and once by the LoopSelect (via the path expression e.dept.size). If the plan is executed by running the selection to completion before beginning to process the join, the selection will cause all of the Departments to be read into memory (this is guaranteed because every Department has at least one Employee in it). If the selection is sufficiently large, those Department objects which were read least recently will have been flushed from the buffer pool by those Departments referenced more recently. Those "early" Departments must be retrieved from disk again to process the join. Under out of order execution, the Select's selection condition is processed when an object is read from Emps. Before proceeding to the next Employee, execution of the plan switches to the LoopJoin operator, which evaluates that portion of the join which can be evaluated given the Department object that was retrieved by the path expression in the selection. After this portion has been evaluated, execution of the selection resumes. This flow of control is diagrammed in figure 4. This strategy results in a reduction in the number of I/O operations, since the Department objects for a particular Department and Employee pairing in the join are only read once. Unfortunately, Department objects are still read more than once overall, since each Employee in the selection must be compared to each Department. We can improve this by recognizing that objects are retrieved in units of pages; when we retrieve an Employee, we "join" it with all the Department objects on all the Department pages in the buffer pool. A small amount of in-memory bookkeeping is required to ensure the correctness of the result. This technique is only applicable when we can guarantee that the set of objects to be traversed by out of order execution is the same as the set of objects that would be traversed by a normal order execution. Constraints on the containment relationships between sets, along with information from the schema manager of the database, allow us to infer the necessary relationships at query compile time.
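A rough sketch of this reuse/out-of-order control flow, using hypothetical Employee and Department objects; the join predicate, the "buffer", and the omitted bookkeeping are simplifications for illustration, not the authors' implementation:

```python
class Dept:
    def __init__(self, did, size, mgr_sal):
        self.did, self.size, self.mgr_sal = did, size, mgr_sal

class Emp:
    def __init__(self, eid, dept, debt):
        self.eid, self.dept, self.debt = eid, dept, debt

depts = [Dept(i, size=20 + i, mgr_sal=50_000 + 1_000 * i) for i in range(5)]
emps = [Emp(i, depts[i % 5], debt=48_000 + 500 * i) for i in range(10)]

select_out, join_out = [], []
buffered = set()                        # Departments currently "in the buffer pool"

for e in emps:                          # LoopSelect scans Emps
    d = e.dept                          # the path expression e.dept pulls d in
    buffered.add(d)
    if d.size >= 25:                    # selection predicate e.dept.size >= 25
        select_out.append(e)
    # Out-of-order step: before the next Employee is read, evaluate the join
    # slice for every Department currently buffered, so those Departments need
    # not be fetched again later. (The bookkeeping that covers pairs whose
    # Department is buffered only later is omitted for brevity.)
    for bd in buffered:
        if bd.mgr_sal >= e.debt:        # illustrative join predicate
            join_out.append((bd.did, e.eid))

print(len(select_out), "selected;", len(join_out), "join results")
```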
Memoization

Out of order execution allows us to reuse the intermediate results of computations by altering the flow of control during the execution of the query. Unfortunately, it is not applicable in all situations, because it is not always possible to determine which objects are actually being reused. Function memoization is a common technique for improving the performance of functional programs, and indexing is a special case of memoization. We can employ a form of memoization to improve the performance of those queries which cannot be improved via out of order execution. Consider the query in figure 5:

Union(Select(Emps; e : isPrime(e.mgr.dept.size)); Select(Emps; e1 : e1.mgr.dept.size < 10 AND e1.wife.salary > $60k))

Figure 5: A query amenable to memoization

The common subexpression e.mgr.dept in figure 5 seems like an ideal candidate for reuse. Assuming that the left Select argument to the Union operator is evaluated "first", we can take the value of e.mgr.dept that is computed by the left arm of the Union, and then evaluate the right arm (Select ...; e1.mgr.dept.size) out of order using the value of e.mgr.dept from the left arm. Unfortunately, this does not work, since the Select in the right arm also needs the value of e1.wife.salary, which cannot be guaranteed to be in the buffer pool at the point when we wish to evaluate e1.mgr.dept.size. If a reusable expression is conjoined with an expression which will cause disk I/O, we cannot use out of order execution, since we cannot guarantee that the I/O operation will not flush needed objects from the buffer pool. However, we are able to use memoization to prevent the path expression e.mgr.dept from being read twice. When the left arm of the Union operator is processed, the implementation of the left Select operator writes a memo file for e.mgr.dept (even though it is evaluating isPrime(e.mgr.dept.size)). The implementation of the right Select operator reads from the memo file for e.mgr.dept, instead of Emps. This eliminates the intermediate traversal of the Managers during the evaluation of the path expression. In this situation, memoization involves building a path index incrementally. The memoization can be improved if the left Select only writes entries whose Employees satisfy the condition e1.mgr.dept.size < 10 (from the right Select) into the memo file. The memo file then contains precisely those objects which satisfy the left conjunct in the right Select's predicate.
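A minimal sketch of the memo-file idea, with the memo held as an in-memory dictionary and a toy object model (names such as oid, mgr, dept and wife are stand-ins for whatever the real schema provides, and the dictionary stands in for the on-disk memo file):

```python
from dataclasses import dataclass

@dataclass
class Dept:
    size: int

@dataclass
class Mgr:
    dept: Dept

@dataclass
class Wife:
    salary: int

@dataclass
class Emp:
    oid: int
    mgr: Mgr
    wife: Wife

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

memo = {}                                   # stands in for the e.mgr.dept memo file

def left_select(emps):
    out = []
    for e in emps:
        dept = e.mgr.dept                   # may cause disk reads the first time
        memo[e.oid] = dept                  # memoize the e.mgr.dept traversal
        if is_prime(dept.size):
            out.append(e)
    return out

def right_select(emps):
    out = []
    for e in emps:
        # reuse the memoized traversal instead of re-reading the Managers
        dept = memo[e.oid] if e.oid in memo else e.mgr.dept
        if dept.size < 10 and e.wife.salary > 60_000:
            out.append(e)
    return out

emps = [Emp(i, Mgr(Dept(size=i + 3)), Wife(salary=55_000 + 2_000 * i)) for i in range(8)]
union = left_select(emps) + right_select(emps)   # duplicate elimination omitted
print(sorted({e.oid for e in union}))
```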
Buffer Replacement Policy

Both reuse/out of order execution and memoization address the re-reading problem when different parts of the same query access objects multiple times. They are not effective for the case where re-reads occur in a single query operator. Changing the buffer manager's page replacement policy can be used to address the case where re-reads arise in a single operator as a result of object sharing. If a shared object can be retained in the buffer pool until it is referenced again, then the work that was done to read the object from disk is reused by subsequent accesses to that object, as long as the shared object remains in the buffer pool. This has a decidedly different flavor from reuse/out of order execution and memoization. Yet it is consistent with our aim of reusing common work, since the "initial work" of retrieving an object from disk is reused by subsequent references to the object. All buffer management algorithms have this property. Our contribution is to provide a policy that is tailored to path expressions, where object sharing is commonplace. Recall that the source of the difficulty is that multiple objects in a source collection reference a single object in a target collection. In the case where a single level of referencing is involved, we can use reference counts from the source objects to the target objects as part of the page replacement metric. For multiple levels of referencing, we simply treat each single-level case in the multi-level path expression. The replacement policy computes the average reference count for a page of objects. The values of the reference counts partition the set of pages into generations, much like the generations that occur in generational garbage collectors [31]. Representatives of multiple generations are present in the buffer pool at any point in time. The replacement policy replaces pages on a priority basis, assigning the lowest priority to the generation with the smallest reference count. Within each generation, pages are replaced using an LRU policy. As a special case, the generation for reference count = 1 can be restricted to a single page, since we know that the only reference to that page has already occurred. This provides FIFO behavior for scan-like queries. As an extension, we cause the reference counts for pages in a generation to decay as the objects within it are referenced. This prevents thrashing in the lower generations and gives a more accurate estimate of the remaining references to the page.
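A compact sketch of a reference-count-driven replacement policy along these lines (generation = remaining reference count, LRU within a generation, counts decaying as the page is re-referenced); the interface and the counts are illustrative assumptions, not the paper's actual code:

```python
from collections import OrderedDict

class RefCountBuffer:
    """Evict from the 'generation' with the smallest remaining reference count;
    within a generation, evict in LRU order. A toy model of the policy above."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()        # page id -> remaining ref count, in LRU order

    def access(self, page, refcount=1):
        hit = page in self.pages
        if hit:
            # decay the count on each reference: fewer references remain
            self.pages[page] = max(self.pages[page] - 1, 0)
            self.pages.move_to_end(page)  # most recently used page goes last
        else:
            if len(self.pages) >= self.capacity:
                self._evict()
            # refcount would come from schema meta-data: how many source objects
            # still reference objects on this page
            self.pages[page] = refcount
        return hit

    def _evict(self):
        # min() scans in LRU order, so among the pages with the smallest
        # remaining count the least recently used one is chosen
        victim = min(self.pages, key=self.pages.get)
        del self.pages[victim]

buf = RefCountBuffer(capacity=2)
print(buf.access("p1", refcount=5), buf.access("p2", refcount=1),
      buf.access("p3", refcount=4), buf.access("p1"))   # p2 is evicted first
```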
Schedule Level Optimization We address the re-reading problem and the random I/O problem by providing a framework for the three kinds of optimizations discussed in section 4.This framework introduces a new level to the optimization process, the schedule level, which takes place after both logical plan rewriting and physical plan generation.At the schedule level, each physical plan operation is expanded into a sequence of schedule level operators.Schedule level operators form an assembly language for query I/O.The operators include instructions for reading an object from a file, comparing objects, extracting object fields, etc.The implementation of physical plan operators as macros over schedule level operations allows the scheduler to have explicit control over disk I/O operations and intermediate results.It also allows multiple custom implementations of operators to exist simultaneously and provides ability to jump into and out of "the middle" of physical plan operators.We can also reorder schedule operators in order to improve the performance of the query.This notion is reminiscent of optimizations that are used in compilers, such as function inlining, peephole optimization, or instruction scheduling.The schedule level optimization process follows these steps: Fifth International Workshop on Database Programming Languages, Gubbio, Italy, 1995 1.The physical plan is converted into an intermediate representation called a schedule graph.The schedule graph emphasizes physical "partitions" of logical collections and enables transformations on those partitions.The inital conversion is accomplished by using templates which map physical plan operators onto schedule graphs. 2. The graph representation is modified using rules that allow nodes in the graph to be deleted, combined, and replaced.The rules embody transformations for reuse and memoization, and use meta-data and inclusionrelations between partitions to detect opportunities for applying the optimizations. 3. The resulting graph is used to statically allocate buffer pages to the various partitions, in an attempt to make best use of the buffer pool.Thus, each partition is assigned a buffer, and each buffer can be controlled by a different page replacement policy.This is a generalization of work relating access patterns and page replacement policies.We use our reference-count based page replacement policy to manage the buffers for partitions that participate in path expressions. 4. The graph is input to a code generation algorithm which generates an executable sequential program. Compile-Time Buffer Allocation Each schedule operator is allocated a private buffer pool, which may be shared with other schedule level operators.This differs from traditional database systems where all physical plan operators share a single buffer pool which holds objects of many types.Segregation of types allows us to tightly control the behavior of objects with respect to the buffer pool.The possible disadvantage of this technique is that it may fail to be responsive to global properties of the query.The allocation of buffers to the various types is determined at compile time, using a heuristic that uses the fanout of pages referenced to determine the buffer allocations. 
Buffer Page Replacement Policy

When static buffer allocation occurs, the scheduler can query the database schema manager to get information about the degree of object sharing via references from a particular collection. If the reference counts for a partition exceed a threshold value, the schedule uses the RefBuffer policy to manage that partition.

Algorithmic Code Generation

The code generation algorithm takes the schedule graph, along with the buffer assignments, and generates a sequential program that can be executed to evaluate the query. The program is a sequence of instructions in "an I/O assembly language". The basic operations of this language include reading an object (or page of objects) from a data structure (disk file, b-tree index, hash-table, etc.), applying a function to an object (or page of objects), comparing fields of an object with some other value (including other object fields), and propagating objects based on some boolean condition. The high-level structure of the code generator is analogous to that of a compiler. We define a notion of basic blocks over the schedule graph, and use dependencies among these blocks to induce a linear ordering on them. Using this linear order, we can then generate an instruction sequence for each node in the schedule graph. At the appropriate points in the instruction sequence the algorithm inserts code to handle reuse. When the code generator has been run on the plan in figure 3, the output appears as in figure 6. The basic schedule operators function like this: the Read operator reads an object from a file, the Apply operator applies an attribute to an object (possibly causing a disk read), the ApplyBuiltIn operator provides a mechanism for operating on basic types, the Filter operator produces its first argument as output if its boolean (second) input is true, and the Output operator sends an object to the result file. In addition, there are also less familiar operations in the schedule. The BinaryTuple operator produces a tuple containing its two arguments. The BufferApply operator applies an attribute to every object of a particular type that is currently in the buffer. In figure 6, BufferApply(buffer(d), .mgr) means that the .mgr attribute will be applied to every Department object in the buffer that holds d. Likewise, the CrossApply operator generates the cross product of its first argument with every object in the buffer for its second argument. In addition, it makes a log of every object in that buffer which participated in the cross product. This log is then made available for use by the LogRead operator. We have presented a framework that supports these optimizations, and an overview of the techniques used in our schedule-level optimizer. The possibility of schedule-level optimization opens a new space of options for improving query runtime performance.
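To give the flavor of such an "I/O assembly language", here is a hypothetical rendering of a few of the operators named above as small Python functions over an in-memory "file"; the operator names follow the text, but the signatures and the dictionary-based data model are assumptions made purely for illustration:

```python
# Toy schedule-level operators. A "file" is a list of dicts here and "reading"
# is iteration; a real implementation would issue page I/O against the store.

def read(file):                        # Read: produce objects from a file
    yield from file

def apply(objs, attr):                 # Apply: follow an attribute (may cause I/O)
    for o in objs:
        yield o, o[attr]

def filter_(pairs, pred):              # Filter: pass the object if the predicate holds
    for o, value in pairs:
        if pred(value):
            yield o

def output(objs, result):              # Output: send objects to the result file
    result.extend(objs)

# A hand-wired schedule for Select(Emps; e : e.dept.size >= 25).
departments = {"d1": {"size": 30}, "d2": {"size": 10}}
emps_file = [{"name": "e1", "dept": "d1"}, {"name": "e2", "dept": "d2"}]

result = []
pairs = apply(read(emps_file), "dept")                   # e -> e.dept (an OID)
sized = ((e, departments[d]["size"]) for e, d in pairs)  # dereference and read size
output(filter_(sized, lambda s: s >= 25), result)
print(result)                                            # [{'name': 'e1', 'dept': 'd1'}]
```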
6,388.2
1995-09-06T00:00:00.000
[ "Computer Science" ]
Changes in the TBARs content and superoxide dismutase , catalase and glutathione peroxidase activities in the lymphoid organs and skeletal muscles of adrenodemedullated rats Thiobarbituric acid reactant substances (TBARs) content, and the activities of glucose-6-phosphate dehydrogenase (G6PDh), citrate synthase (CS), Cu/Znand Mn-superoxide dismutase (SOD), catalase, and glutathione peroxidase (GPX) were measured in the lymphoid organs (thymus, spleen, and mesenteric lymph nodes (MLN)) and skeletal muscles (gastrocnemius and soleus) of adrenodemedullated (ADM) rats. The results were compared with those obtained for shamoperated rats. TBARs content was reduced by adrenodemedullation in the lymphoid organs (MLN (28%), thymus (40%) and spleen (42%)) and gastrocnemius muscle (67%). G6PDh activity was enhanced in the MLN (69%) and reduced in the spleen (28%) and soleus muscle (75%). CS activity was reduced in all tissues (MLN (75%), spleen (71%), gastrocnemius (61%) and soleus (43%)), except in the thymus which displayed an increment of 56%. Cu/Zn-SOD activity was increased in the MLN (126%), thymus (223%), spleen (80%) and gastrocnemius muscle (360%) and was reduced in the soleus muscle (31%). Mn-SOD activity was decreased in the MLN (67%) and spleen (26%) and increased in the thymus (142%), whereas catalase activity was reduced in the MLN (76%), thymus (54%) and soleus muscle (47%). It is particularly noteworthy that in ADM rats the activity of glutathione peroxidase was not detectable by the method used. These data are consistent with the possibility that epinephrine might play a role in the oxidative stress of the lymphoid organs. Whether this fact represents an important mechanism for the establishment of impaired immune function during stress remains to be elucidated. 
Correspondence Introduction Oxygen free radicals (superoxide, [O 2 -] and hydroxyl radical [OH • ]) and hydrogen peroxide (H 2 O 2 ), called reactive oxygen species (ROS), play a significant role in the antibacterial and antitumorigenic capacity of macrophages and neutrophils, but they are also capable of presenting a toxic action on self tissues causing lipid peroxidation (1,2).In fact, the deficiency of vitamin E has been reported to facilitate the increase in ROS concentrations leading to functional changes in the immune system (1,3).For instance, chemical antioxidants such as ß-carotene (4), vitamin E (5), vitamin C (6) and reduced glutathione (GSH) (7) improve the proliferative capacity of lymphocytes, and increase host defense and immunoglobulin synthesis (8).Evidence has been presented that stress may influence the function of the immune system of humans and experimental animals (9) and modify the pro-oxidant capacity of macrophages and neutrophils.These effects may cause a decrease in the function of these cells and modify the immune response to viruses and bacteria.Most of the investigations on this subject have focused on the role of glucocorticoids (10)(11)(12).Indeed, glucocorticoids present a high capacity for inhibition of ROS production by macrophages and neutrophils (13).However, during stress the sympathetic nervous system is stimulated and epinephrine is secreted from the adrenal medulla (14).Furthermore, as is the case for all cells responsive to epinephrine via cAMP generation, immune cells display ßadrenoceptors (15,16).In fact, epinephrine increases the proliferative capacity of CD4+ and CD8+ cells via α-receptors, but inhibits that of CD4+ cells via ßadrenoceptors (17)(18)(19).Thus, it was proposed that when cAMP concentrations increase in the lymphocytes, the proliferative capacity of the latter is inhibited (20).Cannon (21) suggested that cAMP might not be considered only an immune inhibitory agent.Munck et al. (22) also reported that "the increase in the concentration of cortisone in the plasma during prolonged exercise may serve to protect the body against the excessive activation of its immune defenses during stress".This means that epinephrine may act as a modulating agent of human immunity, in addition to glucocorticoids. 
Recent work from our laboratory has shown that epinephrine stimulates H 2 O 2 production in incubated rat macrophages (23).However, the effect of epinephrine observed in vitro does not imply that this hormone plays a physiological role for the control of immune function.In addition, whether the generation of H 2 O 2 by epinephrine is linked to changes in the activity of antioxidant enzymes remains unknown.Taking into account the involvement of ROS in the immune function, it is important to investigate the effect of epinephrine on antioxidant enzyme activities, the major form of cell defense against acute oxygen toxicity (24), and lipid peroxidation.In the present study, the physiological effect of epinephrine (by bilateral removal of adrenal medulla (ADM)) on the activity of superoxide dismutase (SOD), catalase and glutathione peroxidase (GPX) of the lymphoid organs (mesenteric lymph nodes (MLN), spleen and thymus) was examined.For comparison with nonimmune tissues, skeletal muscles (soleus and gastrocnemius) were also studied.Enzyme activities involved in the generation of reducing power were also measured: glucose-6-phosphate dehydrogenase (G6PDh), indicative of the flux of substrates through the pentose-phosphate pathway (25), and citrate synthase (CS), an indicator of the flux of substrates through the Krebs cycle (26).As an indication of the occurrence of lipid peroxidation under these conditions, the content of thiobarbituric acid reactant substances (TBARs) was also determined. Reagents and equipment All chemicals and enzymes were obtained from Sigma Chemical Co.(St.Louis, MO) and Boehringer Mannheim (Germany).The solutions were prepared with twice-distilled, Millipore Milli Q deionized water.All measurements were performed using Zeiss DMR-10 and Gilford (Model Response) spectrophotometers. Animals Male Wistar rats weighing 180 g (about 2 months of age) were obtained from the Institute of Biomedical Sciences.The rats were maintained at 23 o C on a 12-h light:12-h dark cycle. Adrenodemedullation The rats were anesthetized by ether inhalation and the kidneys were exposed through the dorsal side.A small cut in the cortex of the adrenal glands was made so that the medulla could be removed by light pressure on the gland.Enzyme activities and lipid peroxidation were measured 35 days after adrenodemedullation.A similar procedure has been previously used (27). Experimental procedure The rats were always killed between 8:00 and 11:00 a.m. by cervical dislocation without anesthesia.The thymus, spleen, MLN, gastrocnemius (white portion, type IIb fibers) and soleus (type I fibers) muscles were then excised and maintained in liquid nitrogen prior to measurements of the enzyme activities and TBARs content. Enzyme assays The extraction medium for the measure-ments of Cu/Zn-and Mn-SOD, catalase and GPX activities contained 0.10 M sodium phosphate, pH 7.0.For the SOD assay (28), the homogenate was centrifuged at 10,000 g for 30 min.Cu/Zn-and Mn-SOD activities were measured by following the dismutation of KO 2 at 250 nm.The procedures used for catalase and GPX assays were similar to those used by Beutler ( 29) and Maral et al. 
(30), respectively.Catalase activity was determined by measuring the decomposition of hydrogen peroxide at 230 nm.GPX activity was measured by following the rate of oxidation of the reduced form of glutathione.The formation of oxidized glutathione was monitored by a decrease in the concentration of NADPH, measured at 340 nm, due to the addition of glutathione reductase to the medium. Determination of TBARs Substances that react with TBARs were measured as described by Winterbourn et al. (33) in the same extraction medium as for the antioxidant enzyme assays. Statistical analysis Two-way ANOVA with post hoc contrasts was used to compare groups.The level of significance was set at P<0.01. Results The results for sham-operated rats did not differ from those obtained in controls.Thus, the results for adrenodemedullated rats were compared with those of sham-operated rats only.The content of TBARs (Table 1) was diminished by adrenodemedullation in the lymphoid organs (mesenteric lymph nodes (28%), thymus (40%) and spleen (42%)) and gastrocnemius muscle (67%).In relation to the sites of production of reducing power, G6PDh activity (Table 2) was enhanced in the MLN (69%) and reduced in the spleen (28%) and soleus muscle (75%) due to removal of the adrenal medulla.CS activity (Table 3) was lowered in all tissues (MLN (75%), spleen (71%), gastrocnemius (61%) and soleus (43%)), except in the thymus which displayed an increment of 56% as a consequence of the absence of the adrenal medulla. The antioxidant enzyme activities were also markedly changed by the removal of the adrenal medulla.Cu/Zn-SOD activity (Table 4) was increased in the MLN (126%), thymus (223%), spleen (80%) and gastrocnemius muscle (360%) and was reduced in the soleus muscle (31%).Removal of the adrenal medulla decreased Mn-SOD activity in the MLN (67%) and spleen (26%) and increased it in the thymus (142%) (Table 5), whereas catalase activity was lowered in the MLN (76%), thymus (54%) and soleus muscle (47%) (Table 6).It is particularly noteworthy that in ADM rats the activity of glutathione peroxidase was not detectable by the method used for all tissues studied (Table 7). Discussion Whether epinephrine plays a role for the establishment of an impaired immune response as reported for several modalities of stress is an intriguing point.In this study, changes in the oxygen metabolism of the lymphoid organs (mesenteric lymph nodes, thymus and spleen) were investigated by the removal of the adrenal medulla.This experimental procedure was chosen in order to avoid possible misinterpretation of the results that may occur in pharmacological experiments (e.g.administration of adrenergic drugs).The results were compared with those obtained for skeletal muscle.Taken as a whole, the effect of adrenodemedullation on TBARs content and enzyme activities in the lymphoid organs was not markedly different from that observed in the skeletal muscles. 
The removal of adrenal medulla reduced the content of TBARs as shown in Table 1.Therefore, circulating catecholamines may play a role in the lipid peroxidation process.Catecholamines may increase the content of TBARs due to stimulation of the activities of enzymes involved in the sites of production of the reducing power required for NADPH oxidase activity (pentose-phosphate pathway and Krebs cycle) or by inhibition of antioxidant enzyme activities.These possibilities were examined in the present study.The removal of the adrenal medulla produced tissue-specific effects on G6PDh activities.This enzyme can provide a qualitative index of NADPH production via the pentose-phosphate pathway.There was an increase in G6PDh activity in the MLN and a reduction in the soleus muscle and spleen.However, adrenodemedullation provoked a marked decrease of CS activity in all tissues studied except the thymus (Table 3).This decrease was particularly marked for the MLN and spleen, suggesting a lower flux through the TCA cycle in cells of the lymphoid tissues, in the absence of adrenaline (26).Lymphoid cells may compensate for this change by increasing ATP synthesis via anaerobic glycolysis while also decreasing flux through the pentose-phosphate pathway (as a consequence of maximizing carbon flux through the ATP generating reactions of glycolysis).In fact, in our previous study it was found Adrenodemedullation also markedly affected the activities of anti-oxidant enzymes.In summary, among the enzymes studied, the absence of the adrenal medulla raised the activity of Cu/Zn-SOD (Table 4), diminished that of catalase (Table 5) and abolished that of glutathione peroxidase (Table 7).The findings of a decrease in catalase activity and abolition of glutathione peroxidase activity due to adrenodemedullation led us to speculate that epinephrine physiologically regulates the activities of these enzymes in the lymphoid organs.In fact, there is evidence that epinephrine activates glutathione peroxidase in the heart, liver and kidney (35).Whether this effect is mediated by the increase in the concentration of oxygen reactive species (36) or is directly caused by the action of the hormone itself remains to be determined. The present observations are consistent with the possibility that epinephrine might play a role in the oxidative stress of the lymphoid organs.Whether this fact represents an important mechanism for the establishment of impaired immune function during stress remains to be elucidated.that adrenaline markedly stimulates glucose consumption and lactate production in incubated macrophages (34).The combined reduction in TCA cycle and pentose-phosphate pathway activity will reduce NADPH generating capacity.This may explain the observed reduction in TBARs concentration in the lymphoid organs and gastrocnemius muscle of ADM rats (Table 1).Epinephrine may stimulate metabolic pathways for NADPH production, which could increase TBARs concentration in the tissues.In addition, we have evidence that epinephrine can greatly stimulate macrophage H 2 O 2 production via cAMP within one hour of addition of the hormone (23).Thus, epinephrine may increase NADPH production by allosteric (in this example) as well as by transcrip- Table 1 - Content of thiobarbituric acid-reactive substances (TBARs) in the lymphoid organs and muscles of adrenodemedullated (ADM), sham-operated and control rats. 
Table 2 - Glucose-6-phosphate dehydrogenase (G6PDh) activity in the lymphoid organs and muscles of adrenodemedullated (ADM), sham-operated and control rats.Values are reported as means ± SEM for 8 rats in each group.*P<0.01 for comparison between ADM and sham-operated rats (Student t-test).MLN, Mesenteric lymph nodes; GC, gastrocnemius white portion. Table 3 - Citrate synthase (CS) activity in the lymphoid organs and muscles of adrenodemedullated (ADM), sham-operated and control rats. Table 7 - Glutathione peroxidase (GPX) activity (x10 3 ) in the lymphoid organs and muscles of adrenodemedullated (ADM), sham-operated and control rats.Values are reported as means ± SEM for 8 rats in each group.*P<0.01 for comparison between ADM and sham-operated rats (Student t-test).MLN, Mesenteric lymph nodes; GC, gastrocnemius white portion; ND, not detected.
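The percentage changes quoted throughout the Results (for example, the 28% fall in MLN TBARs) appear to be computed relative to the sham-operated mean, and the group comparisons marked in the tables are based on summary statistics (mean ± SEM, n = 8 per group). A minimal sketch of both calculations with hypothetical numbers, not the measured values of the study:

```python
from math import sqrt
from scipy.stats import ttest_ind_from_stats

n = 8                                    # rats per group, as reported

def pct_change(sham_mean, adm_mean):
    """Percent change of the ADM group relative to the sham-operated mean."""
    return 100.0 * (adm_mean - sham_mean) / sham_mean

# Hypothetical TBARs means +/- SEM for one tissue (arbitrary units):
sham_mean, sham_sem = 1.00, 0.05
adm_mean, adm_sem = 0.72, 0.04

print(f"change: {pct_change(sham_mean, adm_mean):.0f}%")      # about -28%

# Two-group comparison from summary statistics (SD = SEM * sqrt(n)).
t, p = ttest_ind_from_stats(sham_mean, sham_sem * sqrt(n), n,
                            adm_mean, adm_sem * sqrt(n), n)
print(f"t = {t:.2f}, P = {p:.4f}")       # the paper's significance threshold is P < 0.01
```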
2,976.4
1998-06-01T00:00:00.000
[ "Biology", "Medicine" ]
Critical Review: Current Research Issues on Crypto-currency and its Application in Financial Sectors

This research article reviews current research issues on crypto-currency and its application in financial sectors. The researcher states that crypto-currency has attracted significant attention and has been adopted in numerous applications, such as the smart grid and the Internet of Things (IoT), but that block chain faces a high scalability barrier which limits its ability to support services with frequent transactions. The paper emphasizes that decentralized, peer-to-peer financial networks face significant research issues and risks such as fraud and suspicious trading, volatile price fluctuations, regulatory uncertainty, and instability in financial sectors. The study shows that the huge growth of online users and the virtual-world concept have created a new business phenomenon that facilitates financial activities such as buying, selling and trading. The article analyzes users' expectations of the future of crypto-currency in financial sectors at different significance levels. In addition, the researcher discusses a statistical report, adopted from published statistics, to provide a future direction for money regulation in the financial sector.

Introduction

Crypto-currency is an internet-based medium of exchange that uses cryptographic functions to conduct financial transactions. Crypto-currency leverages block chain technology to gain decentralization, transparency, and immutability. In this research article, the researcher emphasizes the following facts about crypto-currency: 1. Crypto-currency is not controlled by any central authority. The decentralized nature of the block chain makes crypto-currency theoretically immune to government control and interference. 2. Crypto-currency is an alternative approach to exchanging monetary value between transacting parties in financial sectors. 3. Crypto-currency can be sent directly between parties in financial sectors, and the transaction can be completed with minimal processing fees, avoiding the steep fees charged by traditional financial institutions. 4. Crypto-currency has become a global phenomenon known to most people, along with the sheer impact it can bring to the global economic system.

What Is Crypto-Currency?

In this research article, the researcher emphasizes that crypto-currency is a digital payment system consisting of limited entries in a database that can be changed only when specific conditions are fulfilled. In the current scenario, money is essentially a verified entry in some kind of record of accounts, balances, and transactions. Crypto-currency is defined as an internet-based medium of exchange that uses cryptographic functions to conduct financial transactions in the real world; it leverages block chain technology to gain decentralization, transparency and immutability. The researcher notes that the mechanism ruling the database of a crypto-currency such as Bitcoin is the network of peers through which transactions pass. In a crypto-currency, every peer has a record of the complete history of all transactions and thus of the balance of every account [5]. A transaction is a file that is signed with public and private keys; after being signed, the transaction is broadcast in the network, sent from one peer to every other peer on the basis of P2P technology.
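As a concrete illustration of how a block chain makes the shared transaction history tamper-evident, here is a minimal hash-linked chain using only Python's standard library; the field names and toy transactions are assumptions for illustration, not the actual Bitcoin block format (and real networks add signatures, proof of work, and consensus on top of this linking):

```python
import hashlib, json, time

def block_hash(block):
    # Hash a canonical JSON encoding of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, transactions):
    # Each block commits to the hash of the previous block.
    return {"prev": prev_hash, "time": time.time(), "txs": transactions}

genesis = make_block("0" * 64, [{"from": "alice", "to": "bob", "amount": 5}])
block2 = make_block(block_hash(genesis), [{"from": "bob", "to": "carol", "amount": 2}])

# Verification: recompute each link of the chain.
assert block2["prev"] == block_hash(genesis)

# Tampering with an earlier transaction breaks every later link.
genesis["txs"][0]["amount"] = 500
print("chain still valid?", block2["prev"] == block_hash(genesis))   # False
```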
Yli-Huumo et al. (2016) stated that block chain is a decentralized technology and developed first for bit con-crypto-currency for storing and managing the business transaction in smooth and secure way. In this research paper the researcher conducted a systematic mapping study with the goal of collecting all relevant research on block chain technology. The majority of the research is focusing on relieving and improving limitations of block chain from privacy and security concerns. Related Works Corina Sas and Irni Eliana Khairuddin (2015) emphasized that bit coin is a crypto-currency is a digital money which are completely differs in several ways of traditional us of money. It does not require digital wallets IDs. In this research paper the researcher stated that crypto-currency is the model of trust which is proposed to explore trust challenges raised by the bit coin technology. Iuon-Chang Lin and Tzu-Chun Liao (2017) Adrian et al. (2015) emphasized that crypto-currency has been regarded as a phenomenon and extended to rapid growth by huge swings to control the business transactions in financial sectors. Fang Dai et al. (2018) stated that crypto-currency is a digital payments which is based on decentralized distributed database system for data management solutions. The researchers also emphasized that this is one of the most trustable technology which are providing security, anonymity and data integrity without the need of any third party. Kimchai Yeow et al. (2019) emphasized that Internet of Things (IoT) is geared towards number of devices edge centric computing to offers high bandwidth, low latency, and improved connectivity. Vovchenko et al. (2017) emphasized that crypto-currency is one of the category of virtual currencies which given to development of risk and threats to national security and money laundering cases, criminal money and terrorist funding. The researcher suggested that with minimal negative impact of crypto-currency adopted for management centre in virtual currency infrastructure. Chinmay A. Vyas, Munindra Lunagaria (2014) focused on the unique characteristics of Bit-coin as a crypto-currency and major security issues regarding the mining process and transaction process of bit coin. The researcher emphasized that security is one of the major concern for all transactions for exchanging money to control their business operations in financial sectors. Bela Gipp et al. (2015) discussed on process of improving the crypto-currency at given point in time for trusting web based services that uses the decentralized distributed database. The block chain technology chain to store anonymous, tamper proofs timestamp for digital consent. Ruizhe Yang et al. (2019) stated block chain as the underlying technology of crypto-currency which is having a significant impact on business transactions such as smart grid, Internet of Things. The scalability of block chain technology enhancement, self -organizations, integrations, resource management and wide spread development. Kirillova, et al. (2018) examined the legal nature of bit coin, life coin, web money, ripple and virtual currencies. The researcher stated that licensing of bit coin in digital currency is playing a significant role in financial sectors. This research study is conducted on mining crypto-trading at international levels to prevent the abuse of virtual currencies for money laundering and terrorism financing. 
Karl Sigler (2018) emphasized that crypto-currency is an esoteric experiment to one of the hottest topics in finance and technology field. Crypto-currency is jacking for target machine which is infected with mining malware. A. Yu. Simanovskiy (2018) emphasized that the economic nature of crypto-currency, risks that arise from its use of electronic turn over servicing and consequences of different opinions for portable legalizations. The major way of crypto-currency existences is Ponzi scheme for limiting negative economic and social cost of cryptocurrency use. Irwin and Milad (2016) emphasized that crypto-currency, more specifically on terror funding, new way of payment technology and value transfer system. The researcher also emphasized that the present role of crypto-currency for transfer system to facilitate the funding, planning, seamless and its significant implementation to protect terror attacks. Malhotra, Yogesh (2013) stated that technical focus cryptographic proof of work in the context of virtual crypto-ijbm.ccsenet.org International Journal of Business and Management Vol. 15, No. 3;2020 currencies which is based on natural stage in the evolution of global finance to ensure its integrity for reliable medium exchange in financial sectors. Danny Bradbury (2013) worked on extensively on the project to open source community subject to attacks on numerous occasions, and is in danger zone in financial sectors to minimize interferences for criminal to have found a way to subvert. Adam Abdullah and Rizal Mohr Nor(2018) provided a conceptual framework of crypto-currency for purchasing power of money and to measure the comparative performance of price stability , which retains in store of value in the terms of monetary performance and price stability in financial sectors. Casinoa Thomas et al. (2019) provided a systematic review of block chain technology and its application to highlights on disruptive technology of streaming continuous expanding block chain technology. Remy Remigius Zgraggen (2019) focused on the crypto-currency based insurance contact for the relevant legal frameworks for public and private laws. Jamal Bouoiyour, Refk Selmi and Aviral Kumar Tiwari (2015) stated that crypto-currency is having one of the significant approach financial sectors to control secure and reliable transactions. Problem Statement and Research Issues The researcher emphasized that scalability is one of the significant barrier in block chain technology for exchanging transaction and money value for their business. Security is also one of the major concern in decentralized data storage of the networks. The researcher stated that some of the current research issues and its significant impact in financial sectors. For making more reliable and enhance computing services over internet, the encryption is providing a secure transaction in financial sectors. In terms of secure and reliable transactions, the crypto-currency is one of the significant components for exchanging money value and their significant approach for smooth transactions. The researcher stated that some of the significant research issues on crypto-currency and it application in financial sectors. 1. To study the various security factors of crypto-currencies in financial sectors. 2. To study the current research issues on crypto-currencies and its significant application in financial sectors. 
Research Design and Methodology
This research paper is designed to identify the current research issues on crypto-currency and its significant applications in the financial sector. The researcher developed a proposed research model to emphasize the security barriers of crypto-currency that arise because its data structure is decentralized and blocks of data travel over the internet during financial transactions. Data on current research issues were collected from research portals, books, magazines and other sources, and a critical review of these issues and their significance for the financial sector was developed. The study was also motivated by the shortage of comprehensive reviews on decentralized consensus systems for edge-centric Internet of Things (IoT), covering aspects such as data structures, scalable consensus models, and vulnerable wallets exposed to hacking attacks and theft.
Statistical Report on Crypto Currency
This article presents a statistical analysis of the market capitalization of crypto-currencies in U.S. dollars between 2013 and 2018, considering security challenges and usage in the financial sector. While Bitcoin attracted a growing following in subsequent years, it captured significant investor and media attention in April 2013, when it peaked at a record $266 per bitcoin after surging 10-fold in the preceding two months. Bitcoin reached a market value of over $2 billion at its peak, but a 50% plunge shortly thereafter sparked a raging debate about the future of crypto-currencies in general and Bitcoin in particular. Will these alternative currencies eventually supplant conventional currencies and become as ubiquitous as dollars and euros, or are crypto-currencies a passing fad that will flame out before long? The answer lies with Bitcoin.
Research Issues on Crypto Currency
Crypto-currencies have a decades-long pedigree in academia, but decentralized crypto-currencies (starting with Bitcoin in 2009) have taken the world by storm. Aside from being a payment mechanism "native to the Internet," the underlying blockchain technology is touted as a way to store and transact everything from property records to certificates for art and jewelry. Researchers have embraced crypto-currencies with gusto and have contributed important insights.
1. Anonymity, Privacy, and Confidentiality: Anonymity in crypto-currencies is a matter not just of personal privacy, but also of confidentiality for enterprises. Each transaction is accompanied by a cryptographic, publicly verifiable proof of its own validity. Roughly, the proof ensures that the amount being spent is no more than the amount available to spend from that address.
2. Endpoint Security: Turning to security, the Achilles' heel of crypto-currencies has been the security of endpoints, the devices that store the private keys controlling one's coins.
3. Smart Contracts: One of the hottest areas within crypto-currencies, so-called smart contracts are agreements between two or more parties that can be automatically enforced without the need for an intermediary.
4. Overcoming the Pitfalls: Crypto-currencies implement many important ideas: digital payments with no central authority, immutable global ledgers, and long-running programs that have a form of agency and wield money. As noted above, there are pitfalls for the unwary in using and applying crypto-currencies: privacy, security, and interfacing with the real world.
5. Deep Learning: DNNs (deep neural networks) have evolved into a state-of-the-art technique for machine-learning tasks ranging from computer vision to speech recognition and natural language processing. Deep-learning algorithms, however, are both computationally and memory intensive, making them power-hungry to deploy on embedded systems. Running deep-learning algorithms in real time at sub-watt power consumption would be ideal for embedded devices, but general-purpose hardware does not provide satisfactory energy efficiency for deploying such DNNs.
6. Optimized Data Flow: Deep-learning algorithms are memory intensive, and accessing memory consumes more than two orders of magnitude more energy than ALU (arithmetic logic unit) operations. Efficiency is achieved by exploiting local data reuse of filter weights and feature-map pixels (i.e., activations) in the high-dimensional convolutions, and by minimizing data movement of partial-sum accumulations.
Application of Crypto-Currency in Financial Sectors
Crypto-currency can be defined as a digital asset designed to work as a medium of exchange that uses strong cryptography to secure financial transactions, control the creation of additional units, and verify the transfer of assets, making it difficult to counterfeit; it is not issued by a central authority or government. Significant features of crypto-currencies in the financial sector include the following:
1. Fraud-Proof: When crypto-currency is created, all confirmed transactions are stored in a public ledger. The identities of coin owners are encrypted to ensure the legitimacy of record keeping. Because the currency is decentralized, you own it; neither a government nor a bank has any control over it.
2. Identity Theft: The ledger ensures that all transactions between "digital wallets" can be checked and that an accurate balance can be calculated for each wallet.
3. Instant Settlement: Blockchain is the reason why crypto-currency has any value, and ease of use is the reason why it is in high demand. All that is needed is a smart device and an internet connection, and users instantly become their own bank, making payments and money transfers.
4. Accessible: There are over two billion people with access to the Internet who do not have access to traditional exchange systems.
The Future Dimensions of Crypto Currency
A basic paradox bedevils crypto-currencies and will be hard to surmount: the more popular they become, the more regulation and government scrutiny they are likely to attract, which erodes the fundamental premise for their existence. Some economic analysts predict a big change in crypto as institutional money enters the market. Moreover, there is the possibility that crypto will be floated on the NASDAQ, which would further add credibility to blockchain and its use as an alternative to conventional currencies.
Some predict that all crypto needs is a verified exchange-traded fund (ETF). An ETF would definitely make it easier for people to invest in Bitcoin, but there still needs to be demand to invest in crypto, which some say may not automatically be generated by a fund.
Conclusion
The researcher concluded that crypto-currency plays a significant role in enabling smooth business transactions in the financial sector, provided that fraud and hacker attacks can be avoided through appropriate safeguards and protection. Decentralization also raises security concerns for data management in the financial sector, as it can enable tax evasion, money laundering and other illicit activities. In the current scenario, the most popular crypto-currencies fall between heavily regulated fiat currencies and unregulated assets, which has led some authorities to intervene and warn that Bitcoin could be the next big bubble. Some governments have prohibited the trading of crypto-currency due to the lack of control and market unpredictability. The centralization of hashing power in the hands of a few actors, or in particular geographic regions, remains one of the main challenges.
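The tamper evidence that the review repeatedly attributes to the public blockchain ledger (e.g., the "fraud-proof" feature above) can be illustrated with a minimal hash-chained ledger. The Python sketch below is an illustration only, not the design of any particular crypto-currency; the block fields and transaction format are invented for the example.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents (excluding its own hash field)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> dict:
    """Append a new block that commits to the previous block's hash."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash and check the links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger: list = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(ledger))                   # True
ledger[0]["transactions"][0]["amount"] = 500  # tamper with history
print(verify_chain(ledger))                   # False
```

Because every block commits to the hash of its predecessor, retroactively editing any stored transaction invalidates all subsequent links, which is what makes such a ledger auditable without a central authority.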
3,838.8
2020-02-28T00:00:00.000
[ "Business", "Computer Science", "Economics" ]
Insights on the Organ-Dependent, Molecular Sexual Dimorphism in the Zebra Mussel, Dreissena polymorpha, Revealed by Ultra-High-Performance Liquid Chromatography–Tandem Mass Spectrometry Metabolomics The zebra mussel, Dreissena polymorpha, is extensively used as a sentinel species for biosurveys of environmental contaminants in freshwater ecosystems and for ecotoxicological studies. However, its metabolome remains poorly understood, particularly in light of the potential molecular sexual dimorphism between its different tissues. From an ecotoxicological point of view, inter-sex and inter-organ differences in the metabolome suggest variability in responsiveness, which can influence the analysis and interpretation of data, particularly in the case where males and females would be analyzed indifferently. This study aimed to assess the extent to which the molecular fingerprints of functionally diverse tissues like the digestive glands, gonads, gills, and mantle of D. polymorpha can reveal tissue-specific molecular sexual dimorphism. We employed a non-targeted metabolomic approach using liquid chromatography high-resolution mass spectrometry and revealed a significant sexual molecular dimorphism in the gonads, and to a lesser extent in the digestive glands, of D. polymorpha. Our results highlight the critical need to consider inter-sex differences in the metabolome of D. polymorpha to avoid confounding factors, particularly when investigating environmental effects on molecular regulation in the gonads, and to a lesser extent in the digestive glands. Introduction Ecotoxicological and biosurvey studies often propose the use of biological markers or specific endpoints measured in model organisms as an indication of exposure to natural or anthropogenic substances.Thus, one great challenge of ecotoxicology and stress ecology research is to find relevant molecular signatures of specific stressors in sentinel organisms and to understand their physiological effects on biota.Metabolomics is a sensitive, currently emerging, high-throughput approach for investigating metabolites of low molecular weight (<1500 Da), often using nuclear magnetic resonance (NMR) or liquid/gas chromatography combined with tandem mass spectrometry (LC-MS/MS or GC-MS/MS).Metabolomics analysis allows researchers to describe the metabolite profile of an organism and determine its involvement in dynamic cellular processes to obtain an instantaneous fingerprint of the physiological state of the organism.Metabolomics applied to an organ or an entire organism constitutes one of the most reliable methods of chemical phenotyping for investigating the homeostatic responses to environmental stresses from multiple origins [1][2][3] in organisms like marine or freshwater bivalves [4][5][6][7].Metabolomics greatly helps in characterizing the molecular effects of different stressors on metabolic pathways and increasing our understanding of the mechanism of impairment.Metabolomic investigations are independent of the background genomic dataset of the model organism and therefore provide data applicable to all organisms. The freshwater zebra mussel, D. polymorpha, is a filter-feeding dreissenid mussel native to the Ponto-Caspian regions.This sessile bivalve lives mostly on hard substrates and is present in a wide range of habitats, from freshwater lakes and rivers to brackish estuaries [8].D. 
polymorpha has been considered a widespread and invasive species [9][10][11][12].It has colonized Western Europe and North America, developing large populations because of its high growth rate, and endangering freshwater biota and ecosystems.However, the ecological pressure of D. polymorpha has tended to decrease during the last few years concomitant with smaller populations and individual body sizes, allowing the benthic populations of native species to regain their competitiveness and return to pre-invasion densities [13].Conversely, some ecological characteristics of this species such as its water purification capabilities and influence on rates of nutrient cycling are beneficial in mitigating the harmful effects of eutrophication [14]. D. polymorpha is also extensively used in laboratory experiments and as a bioindicator species for water pollution because of its great abundance, large repartition area, limited mobility, continuous filtering activity, high bioaccumulation potential and ease of handling [15][16][17][18][19][20][21].Its wide food size range combined with filtering capacities ranging from 5 to 400 mL/mussel per hour allows D. polymorpha to be exposed to various anthropogenic or natural pollutants through direct gill absorption, ingestion of contaminated food or of particles on which pollutants may be adsorbed [22][23][24].Interest in the zebra mussel as a bioindicator of environmental pollutants, pathogens or natural toxins such as metals, microplastics, organochlorine contaminants, cyanotoxins and parasites, and as a model organism in ecotoxicology studies, has been widely demonstrated [21,[25][26][27][28][29][30][31][32][33][34][35].D. polymorpha has been suggested as the freshwater counterpart of the marine mussel Mytilus sp. in biomonitoring and ecotoxicological studies [24].It has been used as a sentinel organism of water quality in the Great Lakes since the mid-1970s in the Mussel Watch program for water quality monitoring [36,37].Its remarkable bioaccumulation capacities, legal ecological status allowing in situ collection without limitation, and ease of transplantation in cages have stimulated great interest in using this organism in water quality management plans [38,39]. An NMR investigation of D. polymorpha [40] has underlined the usefulness of the metabolomic approach in comparison to the measurement of core biomarkers for identifying metabolites of interest in ecotoxicological investigations [38].However, to enhance the ecotoxicological relevance of the molecular biomarkers, the reference metabolic conditions that differ among organs, developmental stages, and between the sexes must be clarified [41].In marine mussels, various gender-and tissue-specific metabolome differences have already been reported, reflecting the specificities of the physiological responses to different stressors depending on which sex and organ are considered [7, [42][43][44][45][46][47].Therefore, attempting to define the corresponding baseline in terms of inter-sex and inter-organ metabolite concentrations may help researchers understand the individual variability observed in some ecotoxicological studies [48].The aim of this study was to assess the specific metabolite fingerprints of various tissues (digestive gland, gonad, gills, mantle) in both males and females of D. polymorpha through a non-targeted metabolomic approach using ultra-high performance liquid chromatography-electrospray ionization-tandem mass spectrometry (UHPLC-ESI-MS/MS). Biological Model of D. 
polymorpha To ensure an accurate comparison of the metabolomes of different tissues and genders of organisms, it is necessary to use individuals as similar as possible from the same population and of similar age and size.Thus, organisms were obtained from a single reference site (Lac-du-Der-Chantecoq 48 • 36 07.7 N; 4 • 44 37.0 E) and were selected according to metamorphosis on a hard substrate, a process that yielded a group of similar ages.For this study, organisms about 18 months of age (January 2021) were selected at a size of 25 ± 2 mm.Groups of 40 individuals were placed in aerated 3 L tanks (six aquaria) containing a 1:1 ratio of water from the sampling site and Cristalline ® spring water (Saint Yorre, France).After collection from the field, they were gradually acclimated to lab conditions of 16 ± 2 • C with a 12 h:12 h light:dark cycle for up to seven weeks.During the acclimation, the mussels were fed daily with 2.5 µL per individual of a dietary concentrate containing the microalgae Nannochloropsis salina (Nanno 3600 ® Planktovie, Marseille, France).After the acclimation, individuals were randomly sampled from the six aquaria and sexed by gametes withdrawn from the gonads using a 1 mL syringe to obtain 12 males and 12 females.The mussels were anaesthetized for dissecting, and the digestive glands, gonads, mantle and gills were removed, weighed, individually snap frozen in liquid nitrogen and stored at −80 • C. Extract Preparation from Mussel Tissues and Metabolome Analysis by Mass Spectrometry LC-MS grade acetonitrile, methanol, and formic acid were obtained from Carlo Erba (Val-de-Rueil, France).A standard solution of Na formate (purchased from Sigma-Alrich, Saint-Quentin-Fallavier, France) was freshly prepared with Ultra-pure MilliQ ® water (Guyancourt, France).Analytes were extracted from the chosen tissues of D. polymorpha with weights varying from 22 to 130 mg depending on the type of tissue.The amount of solvent was adjusted in order to keep a ratio of 1 mL of 75%:25% UHPLC methanol: water per 100 mg of fresh tissue.Tissues were mechanically ground (GLH850 OMNI) and sonicated with an ultrasonic probe (Sonics VibraCell, 130 W, 20 kHz, 60% amplitude) for 30 s to release intracellular metabolite contents.Samples were then centrifuged at 15,300× g for 10 min at 4 • C. Supernatants (2 µL) were analyzed by UHPLC (Elute, Bruker or Ultimate 3000, Thermo, Waltham, MA, USA) on a PolarAdvance-II C18 column (2.5 µm pore-size) (Thermo, Waltham, MA, USA) at a 300 µL•min −1 flow rate with a linear gradient of acetonitrile in 0.1% formic acid (5-90% in 16 min).Analytes were subsequently ionized and analyzed using an electrospray ionization hybrid quadrupole time-of-flight high-resolution mass spectrometer (ESI-Qq-TOF Compact, Bruker, Bremen Germany, France) at a speed of 2 Hz over a range of 50-1500 m/z on positive simple MS mode and then on broad-band collision ion dissociation or positive autoMS/MS mode at a speed of 2-8 Hz over the 50-1500 m/z range with information-dependent acquisition. 
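As a small illustration of the extraction protocol described above, the solvent volume can be scaled to the fresh tissue weight so that the stated ratio of 1 mL of 75:25 methanol:water per 100 mg of tissue is preserved. The helper below is a hypothetical convenience function written for this sketch, not part of the authors' workflow.

```python
def extraction_volumes(tissue_mg: float, ratio_ml_per_100mg: float = 1.0,
                       methanol_fraction: float = 0.75) -> dict:
    """Return total, methanol and water volumes (in microlitres) for a tissue weight."""
    total_ul = tissue_mg / 100.0 * ratio_ml_per_100mg * 1000.0
    return {
        "total_uL": round(total_ul, 1),
        "methanol_uL": round(total_ul * methanol_fraction, 1),
        "water_uL": round(total_ul * (1.0 - methanol_fraction), 1),
    }

# Tissue weights in this study ranged from about 22 to 130 mg.
for weight in (22, 75, 130):
    print(weight, "mg ->", extraction_volumes(weight))
```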
The list of peaks (MS/MS spectra within 1 and 15 min of the LC gradient) was generated from recalibrated MS spectra (<0.5 ppm, with an internal calibrant of Na formate) with filtering of 5000 counts of minimal intensity and a minimal occurrence in at least 10% of all samples.All classical adducts ([M + H]+, [M + 2H]+, [M + 3H]+, [M + Na]+, [M + K]+, and [M + NH4]+) and related isotopic forms were searched and grouped together using a threshold value of 0.8 for the co-elution coefficient factor with MetaboScape 4.0 software (Bruker, Bremen, Germany).All data were acquired from the same single LC-MS run.Data QC and Blank samples (injected every 6 injections) were examined in order to ensure the reproducibility and robustness of the whole data series.Data quality in terms of intensity, retention time and mass drift of ions were carefully inspected and recalibration was automatically performed individually by the software on raw data of all samples using internal standards (reference Na formate solution) injected at the beginning of every sample acquisition.Different states of charge and adducts were grouped together and the area under the peak was determined to generate a unique global data matrix containing semi-quantification results for each analyte in all analyzed sample peaks for each analyte (characterized by the respective mean mass of its neutral form and its corresponding retention time). Data Treatment, Statistical Analyses, and Annotation by Molecular Network The two data tables with quantification of area under the peak for the different analytes (one with all analytes and one with only the annotated analytes) were evaluated using MetaboAnalyst 5 (www.metaboanalyst.ca(accessed on 5 June 2021)) for Pareto's normalization, univariate group variance analyses (ANOVA and t-test with Benjamini FDR calculation), multivariate statistical methods (unsupervised principal component analysis, PCA) and data representation by heatmap with hierarchical clustering based on Euclidean distances (https://www.metaboanalyst.ca(accessed on 5 June 2021)).Quantifications of annotated analytes and of all analytes were also compared by gender and between tissues using PERMANOVA analyses applied on PCA in the vegan R package (vegan: community ecology package; https://cran.r-project.org/web/packages/vegan/index.html(accessed on 20 July 2021)).Pairwise comparisons were carried out with the pairwise Adonis R package.The significance threshold was set at p < 0.05.Analyte annotation was attempted using combined ion annotation with the MetaboScape software based on mass and isotopic pattern accuracy and with MetGem molecular networking based on the presence of ions in certain molecular clusters for which substantial annotation could be retrieved according to the MS/MS fragment occurrence.Previously uncharacterized analytes belonging to annotated metabolite clusters were considered potential analogue molecules. Analysis of the Global Metabolome of Tissues from D. 
polymorpha Males and Females A total of 2634 analytes, annotated and not annotated, from untargeted LC-MS metabolomics of the digestive gland, gills, mantle and gonads showed that the metabolome of each tissue appeared to significantly differ from that of other tissues (PERMANOVA p < 0.01, Table 1A).The heatmap with hierarchical clustering revealed first that the metabolic signatures could be discriminated between the four tissues and second between males and females, with only limited intra-group individual variability (n = 12 replicates per tissue and per sex, Figure 1).The molecular profile of the gonad and of the digestive gland significantly discriminated between males and females.The sexual dimorphism of the gonad metabolome was more distinct than that of the digestive gland (PERMANOVA p < 0.01; PAIRWISE PERMANOVA p < 0.01 and p < 0.05, respectively, Table 1B).In the literature, clear differences in metabolomes between tissues and between the sexes have also been observed in the marine mussel Mytilus galloprovincialis [7,47].The metabolic signature of the gonads highlighted a clear sexual dimorphism, as represented in the PCA (Figure 2A1 discrimination along the y-axis).Similar sex-specific differences in the metabolome of mussel gonads have already been reported [47].Volcano plots, in which individual analytes were plotted according to their respective relative fold change between males and females and their t-test and p-values, indicated that among the 2548 analytes (annotated and not annotated) observed in the gonads, 21% showed significantly different concentrations between males and females, with 245 being significantly lower and 299 significantly higher in females compared to males (Figure 2A2).PCA performed with the analytes from the digestive gland revealed slight discrimination of metabolomes of males and females along the second component axis (Figure 2B1), but weaker than the one observed in the gonads.The volcano plots showed that among the 2567 analytes detected in the digestive glands, 6% had different concentrations between the sexes, with 33 having lower concentrations and 124 higher concentrations in females than in males (Figure 2B2).No obvious sexual dimorphism was observed in the gills by PCA (Figure 2C1) and only 4.2% of the 2530 analytes presented differences in concentration between the two sexes (Figure 2C2).In the mantle, no clear sexual dimorphism was observed by PCA (Figure 2D1), but 50 of the 2301 detected analytes had lower concentrations and 115 higher concentrations in females
than in males, representing a 7.1% sex-discriminated pattern (Figure 2D2).Interestingly, in three out of the four analyzed tissues, females showed significantly higher concentrations of analytes than males (299 vs. 245 in the gonad, 124 vs. 30 in the digestive gland, and 115 vs. 50 in the mantle).Overall, female organs exhibited higher analyte contents than males, suggesting that more intense and/or more complex metabolic activities may occur in females, potentially in relation to their higher reproductive effort resulting in a richer integration of reserves in female oocytes than in spermatozoids. The molecular networks were identified using the GNP algorithm from MetGem software with the 579 MS/MS spectra of the most concentrated analytes detected in the gonads, digestive glands, gills, and mantle of D. polymorpha males and females (Figure 3).The appearance of analytes with common MS/MS fragments within the same clusters connected analytes sharing a structural similarity based on their respective fragmentation patterns (cosine score < 0.7).Among all the analytes, MetGem aligned some in parallel with annotated metabolites from public databases based on accurate mass correspondence and sharing of MS/MS fragments.Thus, based on correspondence with both the mass and the isotopic MS/MS fragmentation pattern, we were able to annotate various analytes as genuine or potential phospholipids such as lysophosphatidylcholine (LPC) principally found in the digestive glands, gills and gonads of males and females (Figure 3).Analytes annotated as lysophosphatidylethanolamine (LPE) were also abundant and located mainly in the digestive gland and the gills, and to a lesser extent in the gonads; phosphatidylethanolamine (PE) was also found at relatively high concentrations.Previous studies also reported the presence of LPE, LPC and PE in freshwater molluscs, with the major phospholipid classes being the choline-containing PC and the amine-containing PE, constituting around 50% of the total [51,52].In D. polymorpha, LPE and PE constituted 47% of the total annotated phospholipids [51].LPC is a structural lipid produced by the digestive gland, derived from phosphatidylcholine and used to build cell membranes.LPE is also a membrane component but is additionally implicated in cell signalling and enzyme activation, whereas PE is particularly abundant in the internal layer of the cellular membrane.Different saccharides were also abundant in male and female organs like the digestive gland, the gonads and the gills (Figure 3B).Saccharides are primary metabolites associated with numerous biological processes and functions in molluscs, such as growth, reserve storage, tissue architecture, immunity, energetic metabolism, and energy storage; they form a major part of mollusc tissue extracts [53].Saccharides present a large structural variability depending on mollusc species and organ [53], as also suggested by the present study showing different molecular clusters of saccharides specific to the gonads (blue) compared to the gills (green) of D. polymorpha (Figure 3B).The specificity of saccharides within the gonads was especially marked in females, as seen in the molecular network diagram (Figure 3B). 
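The volcano-plot screening described above (per-analyte fold change between sexes combined with a t-test) can be sketched in a few lines of Python. The Benjamini–Hochberg FDR correction mirrors the univariate treatment reported in the Methods, but the synthetic data, variable names and exact cut-offs are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Illustrative intensity matrices: rows = individuals, columns = analytes.
n_analytes = 2548                      # number reported for the gonads
males = rng.lognormal(mean=2.0, sigma=0.5, size=(12, n_analytes))
females = rng.lognormal(mean=2.0, sigma=0.5, size=(12, n_analytes))

# Per-analyte fold change (female / male) and Welch t-test.
fold_change = females.mean(axis=0) / males.mean(axis=0)
t_stat, p_val = stats.ttest_ind(females, males, axis=0, equal_var=False)

# Benjamini-Hochberg FDR correction of the raw p-values.
reject, p_adj, _, _ = multipletests(p_val, alpha=0.1, method="fdr_bh")

# Volcano-style selection: at least 2-fold change and adjusted p < 0.1.
selected = (np.abs(np.log2(fold_change)) >= 1) & (p_adj < 0.1)
higher_in_females = selected & (fold_change > 1)
higher_in_males = selected & (fold_change < 1)
print(higher_in_females.sum(), "analytes higher in females,",
      higher_in_males.sum(), "higher in males")
```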
Interestingly, the molecular network representation also revealed that glutathione was predominantly present in the gonads, with higher concentrations in females (Figure 3).Glutathione (GSH) is a ubiquitous antioxidant tripeptide implicated in detoxification and cellular homeostasis.Previous studies reported that gonadal tissues of bivalves contained high GSH concentrations relative to other tissues, particularly during reproductive periods.GSH helped to maintain the health of the gonads and protected the gametes from oxidative damage during fertilization and development [54,55].A high GSH content in the gonads has been reported to ensure reproductive success in oysters, Crassostrea virginica by decreasing the susceptibility of gametes and embryos to metal toxicity [55].In vertebrates, a high GSH content in the gonads during oocyte maturation is a reliable indicator of oocyte viability, as it is essential for male pronucleus formation during fertilization and embryo pre-implantation, ensuring embryo development and preventing embryonic cellular apoptosis [56].The bivalves used in our study were at the onset of gametogenesis (January) [57], which may have influenced the GSH contents stored in the gonads for oocyte protection. Analysis of Annotated Analytes from Male and Female D. polymorpha Tissues The heatmap with hierarchical clustering of 198 putatively annotated analytes identified by MS/MS in the four organs of male and female mussels (Figure 4) revealed the same general discriminative patterns seen with the whole set of 2634 analytes previously observed (Figure 1).Globally, the metabolome of each tissue significantly differed from those of the other tissues (PERMANOVA p < 0.01, Table 2A).The heatmap also revealed that the metabolic profiles were first discriminated among the four tissue types and second between males and females (Figure 4).Considering only the 198 annotated analytes, no obvious sex-related differences were seen in the mantle and in gills, whereas a slight discrimination in the digestive gland was visually detectable on the heatmap, although it was not statistically significant (pairwise PERMANOVA p > 0.05, Table 2B).Only the gonads showed a clear molecular sexual dimorphism (pairwise PERMANOVA p < 0.05, Table 2B), but to a lesser extent than what was seen in the previous analysis (annotated and not annotated).In general, the distribution pattern of putatively annotated analytes appeared globally representative of the one observed with all analytes.Overall, the gonads presented the largest number of significantly discriminated analytes (n = 36) between males and females among the 198 putatively annotated analytes.All observed analyte classes were discriminated, especially those comprising various nucleic acids, lipids and amino acids (Figure 5).The highest number of discriminated analytes occurred in females, especially for amino acids and nucleic acids.For example, eight annotated amino acids among the nine significantly discriminated exhibited higher
concentrations in females than in males, mostly in the gonads (Figure 5). The digestive gland contained 31 significantly discriminated analytes, principally represented by lipids (equally abundant in males and females, depending on the molecule), saccharides, and nucleic acids (most abundant in males).Those analytes whose concentrations differed between the sexes were probably involved in gender-specific metabolic pathways.The gills and the mantle contained the lowest number of significantly sexually discriminated analytes (19 and 15, respectively) with higher concentrations of amino acids in females and higher concentrations of lipids in males in gills, and mostly higher concentrations of amino acids and structural lipids in female mantles.The observation that male and female mussels showed several sex-specific modulated metabolites suggests that they may have different protective mechanisms against various stresses.These sexual differences may result in substantial physiological changes, but could also influence measurable biomarkers or targeted toxicological endpoints and induce data variability, particularly if the gonads or digestive glands are investigated.Now, further studies on the metabolome of digestive glands and gonads of both sexes would be interesting to describe the metabolic differences and thus increase our understanding of sex-specific responses to specific pollutants and their ecotoxicological consequences on natural populations.The genetic background of zebra mussel populations is already known to act as a confounding factor in studies of biomarkers, because of different individual responses to contamination [17,58].Therefore, population genetics, sex of the organism, and type of tissue studied should be carefully considered prior to an ecotoxicological or physiological investigation. Among the sexually differentiated lipids that were annotated in the present analysis, several were structural lipids belonging to LPE and LPC classes, which account for about half of the known phospholipids in freshwater bivalves [51].Among the top 25 most discriminated analytes regardless of tissue type, three LPCs were at higher concentrations in females, whereas two LPEs were at higher concentrations in males (Figure 5).These intrinsic biological differences in lipid content between male and female mussels may be associated with their specific needs during the reproductive cycle.This specificity would result in unique differences in their respective lipid regulation process induced by exposure to stress.A previous study performed on the marine mussel, Mytilus galloprovincialis, exposed to polluted effluents showed a gender-specific modulation of various LPCs and LPEs, with a general trend of up-regulation in males and down-regulation in females [59].
Among all tissues, but mainly in the gonads, the concentrations of five cholic acid derivatives, the putatively named methyltestosterone, norethisterone, estradiol, trimegestone, and 19-nor-5-androstenediol, and oxymesterone were higher in males (Figure 6).Many cholic acid derivatives were found in the top 25 most discriminated analytes regardless of tissue type (Figure 6).The presence of molecules related to the steroid hormones that play a critical role in vertebrate reproduction has been previously observed in molluscs [60,61].Their origin and function are still debated, but these potential sexual hormones are either thought to be absorbed through the diet or synthesized by the molluscs themselves from the steroid precursors, cholesterol or pregnenolone [62][63][64].Molluscs have been shown to share some steroidogenic and steroid metabolic pathways with vertebrates.Three steroids, progesterone, testosterone, and 17β-estradiol, have been proposed as functional hormones in gastropods and bivalves [65,66], including D. polymorpha [67,68].Such gender-specific differences in concentrations of endocrine-active metabolites may influence the response of D. polymorpha to contaminant exposure.However, it remains important to note that the putative annotation obtained from the present investigation cannot be considered as a genuine molecular identification, but rather provides insights into the molecular structure of these analytes as cholic-acid-related components, according to the annotations from spectral databases.Various isobaric cholic-acid-related derivatives can have the same mass and similar fragmentation patterns, and therefore cannot be differentiated in our present analysis because of the lack of specific spectral information for these molecules in public databases.Thus, the hypothesis that these gender-specific metabolites could be steroid compounds acting as potential sexual hormones or prohormones involved in steroid metabolic pathways in D. polymorpha remains to be confirmed.

Our study showed that several lipids could be discriminated between male and female organs in the sexual maturation (pre-spawning) period, and we predict a quantitative or qualitative evolution of these sexual differences along the entire spawning cycle of the zebra mussel.The lipid content in dressenids has already been shown to fluctuate annually in synchrony with the gametogenic cycle, exhibiting high concentrations of lipophilic organic compounds in fully mature individuals during late spring and summer, and a decrease in the post-spawning period [69].

Sex hormone concentrations in some bivalves are also dependent on gonadal stage and seasonal variation, suggesting their possible role as endogenous modulators of gametogenesis [66,70].It would be worthwhile to characterize the sexual molecular dimorphism in zebra mussels along the entire reproductive cycle.This information might allow us to compensate for intrinsic confounding factors related to an organism's maturity cycle when performing specific analyses on males and females using sexually dimorphic organs like gonads and digestive glands of D. polymorpha.Sexual molecular dimorphism in D. polymorpha should also be investigated in a natural setting (caged), as it is presumed that environmental factors such as temperature, food availability, and pollutants may influence the (eco)toxicological response [29,68,69].

Conclusions A deeper knowledge of the sexual molecular dimorphism of a model animal may help strengthen ecotoxicological research through a better understanding of the observed variability of molecular responses to various environmental stressors.This study revealed a significant sexual molecular dimorphism in the gonads, and to a lesser extent in the digestive glands, of the ecotoxicological model organism D. polymorpha (zebra mussel).Such differences in the metabolome may influence biomarker responses to anthropogenic pollution or abiotic and biotic stresses and induce bias in interpreting data if the gonad or the digestive gland is targeted for analyses without regard to sex.Sexing individual mussels prior to sampling is difficult, but the risk of errors in data interpretation can be reduced by following physiological responses in gills or in the mantle for lowering inter-sex molecular dimorphism influences.Data provided by the present study may help in developing specific protocols and data interpretation for ecotoxicological or biosurvey investigations using D. polymorpha.Information on the various metabolite concentrations among organs from both sexes may also be a powerful tool for identifying new molecules of interest and increasing basic knowledge for further physiological or biomarker investigations.

Funding: Financial support was obtained from recurrent support of the MCAM and SEBIO units from CNRS, MNHN and Reims Champagne-Ardenne University.The MS spectra were acquired at the Plateau technique de spectrométrie de masse bio-organique, Muséum National d'Histoire Naturelle, Paris, France.

Institutional Review Board Statement: The model organisms used in this study do not benefit from any particular protection status, and there are currently no legal qualifications that must be met for their experimental manipulation in the laboratory.However, this did not exempt us from respecting the 3R's rule.Indeed, in our protocols, we have taken care to use only enough individuals for statistical analyses, and performed anaesthesia before vivisection.

Informed Consent Statement: Not applicable.

Figure 1. Heatmap with hierarchical classification of the whole set of metabolites (n = 2634) determined by mass spectrometry in the digestive glands (red), gills (green), gonads (dark blue) and mantle (light blue) of male (n = 12) and female (n = 12) D. polymorpha.

Figure 2. Principal component analysis (A1,B1,C1,D1) and volcano plots (A2,B2,C2,D2) of the metabolite data from gonads, digestive glands, gills, and mantles of male (n = 12, blue) and female (n = 12, red) D. polymorpha.The variance of the PCA is given on the axis of PC-1 and -2.In the volcano plots, the fold changes in intensity of individual metabolites between males and females are plotted on the x-axis (fold change ≥ 2) with the significance of the differences between males and females (p < 0.1) being determined by t-test (y-axis).The statistical significance for down-regulated metabolites is indicated in blue and for up-regulated metabolites in red for females compared to males (the statistically unvarying metabolites are shown in pale grey, p < 0.1).

Figure 3. Molecular networks generated from 579 MS/MS spectra obtained from the gonads, digestive glands, gills, and mantle of D. polymorpha males (n = 12) and females (n = 12), using the GNPS and t-SNE tools.The diagram displays connected metabolites sharing a structural similarity based on the similarity of their respective fragmentation patterns.Annotated compounds are indicated in the dashed-line circles.Relative concentrations of annotated compounds are indicated in (A) each tissue (in green for gills, in red in digestive glands, in blue in gonads and in turquoise in mantles), and (B) in each sex (in blue in males and in red in females).

Figure 5. List of most discriminated metabolites among the 198 annotated molecules between males and females according to their molecular family and to the different tissues.The metabolites with concentrations higher in females are shown in red, while those higher in males are shown in blue.The darker the color the greater the fold change.Statistically significant results (ANOVA, p < 0.01) are indicated with a star.

Figure 6. The top 25 most discriminated metabolites among the 198 annotated molecules between D. polymorpha males and females.The metabolites expressed principally in males are shown in green and in females in red.Abundances (mean area of the MS peak ± SD, n = 12) of the top four metabolites in the digestive glands, gills, gonads, and mantle are depicted in box plots (green for males and red for females).

Author Contributions: Conceptualization and methodology, E.L., A.G. and B.M.; data acquisition and analysis, L.S., B.M. and P.F.; writing-original draft preparation, E.L.; writing-review and editing, B.M., A.G. and P.F.All authors have read and agreed to the published version of the manuscript.

Table 1. Results of PERMANOVA analyses applied to the MS/MS peak list of 2634 analytes for comparison of their relative quantity according to tissues (A) and sexes (B).Significant p-values are indicated in bold (p < 0.05).

Table 2. Results of PERMANOVA analyses of the MS/MS peak list of the 198 putatively annotated analytes for comparisons of their relative quantity according to tissue (A) and sex (B).Significant p-values are indicated in bold (p < 0.05).
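Tables 1 and 2 summarize PERMANOVA comparisons computed with the vegan R package; a roughly equivalent one-way test can be run in Python with scikit-bio, as sketched below on made-up data. The grouping labels, distance metric and permutation count are assumptions chosen for illustration, not a re-analysis of the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

rng = np.random.default_rng(1)

# Made-up feature matrix: 24 gonad samples (12 males, 12 females) x 50 analytes.
intensities = rng.normal(size=(24, 50))
sex = ["male"] * 12 + ["female"] * 12
ids = [f"sample{i}" for i in range(24)]

# Euclidean distances between samples, as used for the hierarchical clustering.
dm = DistanceMatrix(squareform(pdist(intensities, metric="euclidean")), ids)

# One-way PERMANOVA with 999 permutations (the same idea as vegan's adonis2).
result = permanova(dm, grouping=sex, permutations=999)
print(result["test statistic"], result["p-value"])
```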
7,875
2023-10-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Does the Macro-Temporal Pattern of Road Traffic Noise Affect Noise Annoyance and Cognitive Performance? Noise annoyance is usually estimated based on time-averaged noise metrics. However, such metrics ignore other potentially important acoustic characteristics, in particular the macro-temporal pattern of sounds as constituted by quiet periods (noise breaks). Little is known to date about its effect on noise annoyance and cognitive performance, e.g., during work. This study investigated how the macro-temporal pattern of road traffic noise affects short-term noise annoyance and cognitive performance in an attention-based task. In two laboratory experiments, participants worked on the Stroop task, in which performance relies predominantly on attentional functions, while being exposed to different road traffic noise scenarios. These were systematically varied in macro-temporal pattern regarding break duration and distribution (regular, irregular), and played back with moderate LAeq of 42–45 dB(A). Noise annoyance ratings were collected after each scenario. Annoyance was found to vary with the macro-temporal pattern: It decreased with increasing total duration of quiet periods. Further, shorter but more regular breaks were somewhat less annoying than longer but irregular breaks. Since Stroop task performance did not systematically vary with different noise scenarios, differences in annoyance are not moderated by experiencing worsened performance but can be attributed to differences in the macro-temporal pattern of road traffic noise. Introduction Noise annoyance is one of the most important negative health-related effects of environmental noise [1,2]. For annoyance, exposure-response relationships are typically based on time-averaged metrics, such as the A-weighted equivalent continuous sound pressure level (L Aeq ), the day-night level (L dn ), or the day-evening-night level (L den ) [3][4][5]. However, while such noise metrics have proven to be strong predictors of annoyance (e.g., [4,6]), they ignore other potentially important acoustical and non-acoustical characteristics of a noise situation, in particular the macro-temporal pattern (e.g., [7][8][9][10][11]). The objective of our study therefore was to elucidate the link between the macro-temporal pattern of road traffic noise and annoyance on the one hand, and cognitive performance on the other hand, especially as the latter might moderate annoyance ratings, and because evidence of noise effects on cognitive performance is still scarce [12]. Note that the term "road traffic noise" is used throughout this paper to refer to either road traffic induced "noise" or "sound". The term "road traffic noise" is very common (e.g., [6]). However, strictly speaking, sound and noise are not the same. Sound refers to the physical quantity sound pressure from which acoustical metrics can be derived with calculations or measurements, while noise refers to unwanted sound entailing negative effects on humans (e.g., [6]). As a consequence, studies on negative effects rather refer to noise, while soundscape studies focusing on potentially positive effects refer to sound (e.g., [13]). Road traffic noise and its effects on annoyance and cognitive performance becomes increasingly important as urbanization is progressing. While less than 34% of the global population lived in urban regions in 1960, this number rose to more than 56% globally in 2020 (and to~74% in Europe) [14]. 
This growth of urban areas goes hand in hand with an increase in noise pollution, in particular due to road traffic. Accordingly, some 113 million Europeans were estimated to be exposed to road traffic noise L den of 55 dB or more in 2017 [15], of which more than 72% lived in urban areas. Increasing road traffic noise calls for effective countermeasures (noise control and mitigation) to be considered by urban planners. They need to know which acoustic qualities and quantities they have to preserve or (re-)create in remnant or newly designed urban spaces. This, however, requires sufficiently funded knowledge on the effects of traffic noise. While much research was dedicated to noise annoyance in the past (e.g., [2]), effects on cognitive performance are less explored [16,17]. A recent systematic review of non-experimental studies on the association between transportation noise and cognitive performance found only 34 papers, which did not allow for a quantitative meta-analysis and were exclusively dedicated to child populations [12]. Thus, studies on mutual effects of road traffic noise on annoyance and cognitive performance of adults are desirable. The macro-temporal pattern of noise and its effect on noise annoyance may be described with different indicators. The number of dominant events, typically defined relative to a threshold (e.g., Number above Threshold, NAT [18]), has been reported to be a promising predictor of annoyance [9,19,20], and also the maximum sound pressure level (L A,max ) is occasionally used for the same purpose [21]. Besides, one may use statistical levels, namely, L 10 , L 50 and L 90 , to describe rare events, average noise levels and background noise [22,23], respectively, or differences between statistical levels to define fluctuation and/or emergence [24]. Further, quietness was suggested as an additional predictor for (reduced) noise annoyance [7,10]. Finally, the eventfulness of noise situations, expressed as intermittency ratio [11], was proposed as an additional indicator for annoyance. Literature indeed suggests annoyance to be associated with such indicators for the macro-temporal pattern of noise. One study found reduced annoyance in highly intermittent road traffic noise situations with only a small number of vehicles per hour [5], which might be the consequence of phases of relative quietness between events, lasting two or more minutes on average. Several other studies emphasized the need to consider quiet periods (i.e., noise breaks) in the assessment of noise impact on public health [8,[25][26][27][28][29]. They suggested that not only the total length of noise breaks, but also their distribution and individual duration could be important [8,25,27], as longer breaks (in total and individually) might mitigate annoyance [8][9][10]25,27]. Here, a minimum duration of noise breaks seemed necessary to be noticeable and effective [25,[27][28][29], which should last one minute, called "a while" ("eine Weile" in German) [25], or three minutes [27][28][29]. Calm periods were also found in [30] to reduce annoyance, while their pattern (regular or irregular) did not have a significant effect. However, with 0.25-1.65 s, the noise breaks were quite short. Thus, the macro-temporal pattern may be decisive for annoyance, but literature on this aspect is still quite scarce. In addition to annoyance, the macro-temporal pattern of road traffic noise may also affect cognitive performance. 
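To make the level-based indicators mentioned above concrete, the short Python sketch below computes the LAeq, the statistical levels L10, L50 and L90, and a Number-above-Threshold count from a hypothetical time series of A-weighted levels. The one-second sampling, the 70 dB(A) threshold and the synthetic pass-by scenario are assumptions chosen only for illustration.

```python
import numpy as np

def laeq(levels_db: np.ndarray) -> float:
    """Energetic (not arithmetic) average of A-weighted levels in dB(A)."""
    return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

def statistical_levels(levels_db: np.ndarray) -> dict:
    """L10/L50/L90 = levels exceeded 10/50/90% of the time."""
    return {f"L{p}": float(np.percentile(levels_db, 100 - p)) for p in (10, 50, 90)}

def number_above_threshold(levels_db: np.ndarray, threshold_db: float) -> int:
    """Count distinct events whose level rises above the threshold."""
    above = levels_db > threshold_db
    # An event starts wherever the signal crosses the threshold upwards.
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

# Hypothetical 10-minute scenario sampled once per second: quiet background
# around 35 dB(A) with four car pass-by events plateauing at 75 dB(A).
rng = np.random.default_rng(42)
levels = rng.normal(35, 2, 600)
for start in (60, 180, 330, 480):
    levels[start:start + 15] = 75.0   # constant plateau for simplicity

print(round(laeq(levels), 1), statistical_levels(levels),
      number_above_threshold(levels, threshold_db=70.0))
```

Such a sketch also makes the difference between energetic and statistical descriptors visible: a few loud pass-bys dominate the LAeq while leaving L50 and L90 close to the background level.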
In everyday life and at work, cognitively demanding tasks often have to be achieved in the presence of background noise. Consequently, the detrimental effects of task-irrelevant sound on cognitive performances have been explored in a multitude of basic cognitive psychological studies (see, e.g., [31][32][33]). However, whereas quite some research focused on chronic effects of road traffic noise on children's cognitive performance [12], surprisingly little evidence is available on acute effects on cognitive performance of adults (e.g., [17,[34][35][36][37]). With regard to the macro-temporal pattern of road traffic noise as constituted by the duration and distribution of noise breaks, the effect on attentional functions is of particular interest. This is because unexpected, salient changes in the acoustic background cause the distraction of the attentional focus from the task to the background sound, so that controlled task-related processes are interrupted. This attentional capture and resulting drop in cognitive performance is known as the "deviance effect" [38]. It occurs because our auditory-cognitive system constantly monitors the acoustic background, at least to a certain extent, even when we are concentrating on a given visual cognitive task unrelated to the noise. In fact, a certain distractibility is an important prerequisite for human survival in potentially threatening environments. However, when focusing on a cognitive task, road traffic noise is arguably irrelevant in all respects. Nonetheless, its macro-temporal pattern may cause attentional capture, in particular the transitions from noisy to quiet periods and back, and/or irregular noise breaks as unanticipated changes in the auditory background. Yet while the length and distribution of noise breaks appear to affect noise annoyance, their effects on attentional capture have not been studied to our knowledge. Since subjective annoyance ratings and cognitive task performance do not necessarily go hand in hand, it is not possible to infer from noise effects on annoyance to cognitive performance effects [39][40][41]. Thus, both effect dimensions should be studied for a comprehensive evaluation of road traffic noise and its macro-temporal pattern, even more so as impacts on cognitive performance might moderate noise annoyance, and as mutual effects of road traffic noise have hardly been studied so far. For example, one might notice that his/her own performance is reduced under road traffic noise, and this is then expressed in a higher subjective annoyance rating. The objective of the present study therefore was to investigate the effects of the macro-temporal pattern of different road traffic scenarios on noise annoyance and objective performance indicators of attentional functions by means of psychoacoustic laboratory experiments. Methodological Approach In this study, two experiments were conducted to investigate the effects of the two independent macro-temporal pattern variables "relative quiet time" and "quiet time distribution" (cf. Section 2.3) on short-term noise annoyance and cognitive performance in a task which predominantly relies on attentional functions: the Stroop task [42]. Experiment 1 investigated the individual and combined effects of the two variables, while experiment 2 focused on the effect of quiet time distribution in more detail. Two different versions of the Stroop task, derived from the colour test [42] and shape test [43], were used (Section 2.2). 
The latter were identified as suitable in a pilot study to this paper [44], where (i) the difficulty of Stroop tasks necessary for the framework of our study was assessed, (ii) interchangeable Stroop tasks were identified, and (iii) the chosen tasks were applied in a preliminary listening experiment to test their feasibility. The pilot study is described in detail in [44]. Figure 1 gives an overview of the workflow of the experiments.

In the following, Section 2.1 introduces the experimental concept of our study, Section 2.2 presents the Stroop tasks, and Section 2.3 the indicators used to quantify the macro-temporal pattern of the road traffic noise scenarios. Section 3 then documents experiment 1 and Section 4 experiment 2. Section 5 discusses the results, before Section 6 gives the major conclusions to our study.

Experimental Concept: Unfocussed Listening Experiments

In two experiments, subjectively perceived acute noise annoyance reactions (so-called "short-term annoyance" [45,46] or "psychoacoustic annoyance" [47]) to road traffic noise scenarios with different macro-temporal pattern were investigated under laboratory conditions. Each scenario was several minutes long (4.5 min in experiment 1 and 10 min in experiment 2) and comprised a number of single car pass-by events.

Figure 1. Study design: Pilot study to this paper by Taghipour et al. [44] to identify suitable Stroop task versions, experiment 1 on the association of noise annoyance and cognitive performance with relative quiet time (RQT) and quiet time distribution (QTD), and experiment 2 on the association with QTD. Details are given in [44] (pilot study) as well as in Sections 3 and 4 (experiments 1 and 2).

The listening experiments were designed as "unfocused listening experiments" (e.g., [48,49]), where the participants' primary focus was not on the noise scenarios but on a cognitive task (see below). While focused listening experiments are widely used in studies where participants attentively listen to and rate acoustic stimuli of relatively short duration (usually <1 min; e.g., [45,48]), unfocused experiments are typically performed for subjective assessment of noise scenarios with considerably longer durations as used here (several minutes or hours; e.g., [17,49,50]). Furthermore, the latter experimental set-ups allow both measuring the effects of sound on cognitive performance and collecting subjective annoyance (or other) ratings of the sound situations.

In the present study, the participants conducted a visually presented cognitive task while road traffic noise scenarios were played back. The participants' primary focus was thus on the cognitive task and not on the noise scenarios. However, at the end of each noise scenario, the participants rated their noise annoyance. As laboratory setup, an office environment was chosen in which an open window was simulated through which the road traffic noise would enter the office (Figure 2). To that aim, a loudspeaker playing back the road traffic noise scenarios was placed in front of the closed window. For the experiments, moderate exposure scenarios with L Aeq of 42-45 dB(A) were chosen, which are representative values for an office environment. The daytime limit value (impact threshold) for road traffic noise of 60 dB outdoors in residential zones according to Swiss legislation [51] and a sound level attenuation during transmission from the outside to the inside of some −15 dB for tilted windows [52,53] approximately result in the above indoor L Aeq. Likewise, a road traffic noise L den of 53 dB according to the recommendation of the WHO [6], corresponding to a daytime L Aeq of ~51 dB(A) [54], and a sound level attenuation during transmission from the outside to the inside of some −10 dB for open windows [53] lead to similar values. Besides the actual noise scenarios, a constant low background sound was played back with an additional loudspeaker (cf. Section 3.1).

The experiments were approved by the ethics committee of Empa (approval CMI 2019-224 of 30 October 2019). They followed general guidelines such as [55,56] and were conducted similarly to previous experiments by the authors (e.g., [21,45]).

Stroop Task Versions for Unfocussed Listening Experiments

Cognitive performance was tested using different versions of the Stroop task. Details on the Stroop task are given, e.g., in [57]. In its standard version, different colour words are displayed (blue, green, red, yellow) which are either printed in the same colour as their semantic meaning (congruent item; e.g., the word "green" displayed in green colour) or in another colour (incongruent item; e.g., the word "green" displayed in blue) [42] (cf. first row of Figure 3).
Participants are asked to respond to the colour in which the word is printed (in the latter example: blue) and not to the word's semantics (here: green). Reading the semantics of a word is an automated process for skilled readers, so that in the case of incongruent items the automatically activated word must be inhibited and the correct response, namely the print colour of the word, must be specifically selected. Therefore, an increase in errors and/or response times occurs for incongruent items compared to congruent items, which is the so-called Stroop effect [42,58].

Performance in the Stroop task relies on attentional functions, namely, selective attention and inhibitory functions, so that it should be sensitive to attentional capture induced by transitions from a quiet period to road traffic noise or vice versa.

As working on a large amount of look-alike items for prolonged time periods might become too tiresome, different versions of the Stroop task were used in the present study. Two versions of the Stroop task were identified in a pilot experiment to this study (details see [44]) as sufficiently equivalent with respect to difficulty, interchangeability and observability of the aforesaid Stroop effect (cf. Figure A1 in the Appendix A). The first version was a colour test where, contrary to its standard version ([42], see above), participants were asked for the semantics of the colour word (instead of its actual print colour) (cf. first row of Figure 3). The second version was a shape test (cf. [43]), where participants were asked to identify the shape of a geometric form, while a written word within it specified the same or a different geometric form (cf. second row of Figure 3). Here, congruent items are those in which the semantic meaning of the word and the geometric shape match (e.g., the word "rectangle" is printed in a rectangle), while these do not match for incongruent items (e.g., the word "rectangle" is printed in a circle while the latter should be named). In addition to the above two versions of the Stroop task, two variants each were used to keep the task to be processed sufficiently diverse (e.g., a shape test variant with oval, square, and triangle; cf. Figure 3).

The different versions/variants of the Stroop task were implemented in a listening test program in the Python-based PsychoPy software environment [59]. The individual trials were presented on a monitor screen, and responses were given by the participants on a keyboard and stored by the program.
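As an illustration of how a single self-paced trial of the kind described above might be implemented in PsychoPy [59], the following minimal sketch presents one incongruent colour-word item and records the keyboard response; the window settings, key mapping and stimulus word are illustrative assumptions and not the exact configuration used in the experiments.

```python
# Minimal sketch of one self-paced Stroop (colour-word) trial in PsychoPy.
# Window size, key mapping and the stimulus word are illustrative assumptions.
from psychopy import visual, core, event

win = visual.Window(size=(1280, 720), color="grey", units="pix")

# Incongruent example item: the word "green" printed in blue.
stimulus = visual.TextStim(win, text="green", color="blue", height=60)

clock = core.Clock()
stimulus.draw()
win.flip()          # show the item
clock.reset()       # start timing the response

# In the colour-word variant used here, participants answer the semantics of
# the word; assumed key mapping: r/g/b/y for red/green/blue/yellow.
keys = event.waitKeys(keyList=["r", "g", "b", "y"], timeStamped=clock)
key, reaction_time = keys[0]

correct = (key == "g")   # correct answer is the word's meaning: green
print(key, reaction_time, correct)

win.close()
core.quit()
```

In the experiments, such trials were presented back to back and self-paced for the duration of each noise scenario, with the responses and reaction times logged by the test program.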
Indicators for the Macro-Temporal Pattern of the Road Traffic Noise Scenarios

Number of events (N): Since this study used isolated car pass-by events mixed to prepare the scenarios (see below), the number of events in each scenario, as well as the logarithm log(N) as sometimes used to predict annoyance (e.g., [20]), were directly available.

Relative Quiet Time (RQT): Based on suggestions by [10], RQT is determined as the ratio of the total duration of quiet periods (T quiet) to the total duration of a scenario (T scenario) [26]. To that aim, T quiet is calculated as the sum of all (individual) quiet periods and divided by T scenario as

RQT = T quiet / T scenario · 100% = (Σ t i) / T scenario · 100%,

where t i is the duration of the i-th individual quiet period.

Intermittency Ratio (IR, %): IR is a measure for the eventfulness of a noise scenario [11]. It expresses the proportion of the acoustical energy of all individual noise events relative to the total sound energy of a scenario as

IR = 10^(0.1 · L Aeq,T,Events) / 10^(0.1 · L Aeq,T) · 100%,

where L Aeq,T,Events is calculated from contributions of events exceeding a given threshold K. In contrast to other descriptors working with thresholds, the latter is not constant, but defined dynamically relative to the L Aeq of the scenarios using

K = L Aeq,T + C,

where C is a constant offset, set to 3 dB. IR ranges from 0-100%. An IR larger than 50% indicates that more than half of the total sound energy is due to distinct pass-by events. In situations where all events clearly emerge from background noise (e.g., at a receiver close to a railway track), IR gets close to 100%, while constant road traffic as observed from a receiver not too close to a motorway yields only small IR values.
Note that while a high IR is a precondition for noise breaks (large RQT) to occur, it does not allow studying the effect of QTD (i.e., the temporal distribution and length of the noise breaks).

Centre of Mass Time (CMT): CMT is an indicator for quiet periods which penalizes the fragmentation of quiet periods and rewards their clustering, and thus increases with longer quiet time periods [8]. It is calculated from the durations t i of the individual quiet periods in the scenario (in seconds), following [8].

Quiet Time Distribution (QTD): QTD is a categorical variable for the nature of noise breaks. Here, it discriminates between regular and irregular temporal distribution of the breaks as well as between different durations of the irregular noise breaks.

Experiment 1

In experiment 1, the individual and combined effects of the independent macro-temporal pattern indicators RQT and QTD on noise annoyance and cognitive performance in the Stroop task were investigated.

Audio Processing and Resulting Road Traffic Noise Scenarios

Road traffic noise scenarios (WAVE PCM format) were prepared in MATLAB Version 2019a (The MathWorks, Inc., Natick, MA, USA) from stereo recordings of individual car pass-by events (dominated by tire/road noise) made with a Jecklin disk setup within a previous study [45]. Since the laboratory setup should represent an office environment in which the road traffic noise enters through an open window, the signals were down-mixed from stereo to mono by means of crossfading. The recordings, processing, and playback were carried out at a sampling frequency of 44.1 kHz.

Road traffic noise scenarios were created from excerpts of the individual car pass-by events by mixing them together sequentially (and sometimes slightly overlapping) in time. After careful inspection of the audio files (audibly as well as based on their A-weighted and FAST-time weighted level-time histories, L AF), an average duration of 10 s was chosen for the excerpts. However, to obtain realistic sound scenarios, three excerpts, of 9, 10, and 11 s length, were cut from each signal. One of these three excerpts per event was randomly chosen for the preparation of a scenario. The excerpts were gated with raised-cosine ramps of 2 s. They were further highpass and lowpass filtered at 52 Hz and 10 kHz, respectively, to consider the limits of the loudspeaker at low frequencies and inherent recording noise at high frequencies.

In total, seven scenarios, each lasting 4.5 min, were prepared for experiment 1. Additionally, two 30 s long road traffic noise scenarios were created for the participants' familiarization period with the noise and the cognitive task at the beginning of the experimental session.

The road traffic noise scenarios covered four levels of RQT, namely, 0.0% (corresponding to 36 car pass-by events), 44.3% (15 events), 62.9% (10 events), and 81.5% (5 events). Further, two types of QTD were used for the quiet periods: either a regular distribution (referred to as "regular" in the following account) or a combination of short quiet periods and two longer (1-min) quiet periods (referred to as "irregular"). While the situation with 0.0% RQT served as a reference without quiet periods, the three levels of RQT (44.3%, 62.9%, 81.5%) were combined with the two QTD types (in total 3 × 2 + 1 = 7 road traffic noise scenarios).
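To make the macro-temporal indicators introduced in Section 2.3 more concrete, the sketch below computes RQT and IR from a sampled A-weighted level-time history of a scenario such as those just described; the sampling interval, the simple level-based quiet-period criterion and the function name are illustrative assumptions rather than the exact implementation used for Table 1.

```python
import numpy as np

def rqt_and_ir(laf, dt=0.125, quiet_margin=3.0, c_offset=3.0):
    """Illustrative computation of relative quiet time (RQT, %) and
    intermittency ratio (IR, %) from an A-weighted level-time history
    `laf` in dB(A), sampled every `dt` seconds (simplifying assumptions)."""
    laf = np.asarray(laf, dtype=float)
    energy = 10.0 ** (laf / 10.0)              # relative sound energy per sample
    t_scenario = len(laf) * dt
    laeq = 10.0 * np.log10(energy.mean())      # overall LAeq of the scenario

    # RQT: share of the scenario spent in quiet periods, here approximated as
    # samples lying more than `quiet_margin` dB below the overall LAeq.
    quiet = laf < (laeq - quiet_margin)
    rqt = 100.0 * quiet.sum() * dt / t_scenario

    # IR: proportion of the total sound energy contributed by event samples,
    # i.e., samples at or above the dynamic threshold K = LAeq + C (C = 3 dB).
    k = laeq + c_offset
    events = laf >= k
    if events.any():
        laeq_events = 10.0 * np.log10(energy[events].sum() / len(laf))
    else:
        laeq_events = -np.inf
    ir = 100.0 * 10.0 ** ((laeq_events - laeq) / 10.0)
    return rqt, ir
```

A highly intermittent scenario with long breaks would yield a large RQT and an IR close to 100%, whereas dense, continuous traffic would drive both values down, in line with the interpretation given in Section 2.3.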
All road traffic noise scenarios had the same L Aeq of 54 dB(A) at the window (measured 50 cm away from and in front of the loudspeaker) and of 44.5 dB(A) at the participant's ear level at the desk. As the number of car pass-by events varied between scenarios, the L Aeq of the individual pass-by events had to be adjusted.

Figure 4 shows the level-time histories of the road traffic noise scenarios, visualizing the different distributions and resulting lengths of the quiet periods, and Figure 5 the corresponding one-third octave spectra, which were all very similar. Table 1 presents the indicators for the resulting macro-temporal pattern of the scenarios, and Table A1 in the Appendix A presents the correlation analysis using Spearman's rank correlation coefficient (r s) [60] for the continuous indicators, as a measure of similarity of the indicators without an a priori assumption of a linear relation. While the L AF,max generally decreases with increasing number of events to obtain the same overall L Aeq for all scenarios, a few events of scenarios S5 and S6 (each encompassing 15 events) had a similar L AF,max as the events of S3 and S4 (each encompassing 10 events), so that the L AF,max were almost identical for those four scenarios (Table 1). N, RQT, IR and L AF,max were closely correlated to each other. CMT, in contrast, was not correlated to these indicators (Table A1), but was closely related to QTD, with substantially larger values for irregular than for regular distributions (Table 1). Thus, with N, IR and L AF,max being closely related to RQT and CMT being closely related to QTD, the association of the macro-temporal pattern with annoyance and cognitive performance was mainly investigated with RQT and QTD (cf. Sections 3.4 and 3.5).
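Since the number of pass-by events differed between scenarios while the overall L Aeq had to be identical, the event levels were adjusted as described above; a minimal sketch of such a level adjustment for a digital signal is given below, with the measured and target levels as purely illustrative numbers.

```python
import numpy as np

def equalize_laeq(signal, current_laeq, target_laeq):
    """Scale a calibrated signal so that its LAeq matches target_laeq.
    current_laeq is the measured LAeq of the unscaled signal in dB(A);
    both values refer to the same calibration reference (assumption)."""
    gain_db = target_laeq - current_laeq
    return signal * 10.0 ** (gain_db / 20.0)   # amplitude gain for a dB change

# Example: scale a mixed scenario measured at 56.2 dB(A) down to the 54 dB(A)
# target at the window position (illustrative numbers, not measured values).
scenario = np.random.randn(44100 * 10)          # placeholder for a real recording
adjusted = equalize_laeq(scenario, current_laeq=56.2, target_laeq=54.0)
```

The same gain applies to all pass-by events of a scenario, which is why fewer events per scenario imply a higher per-event level, as reflected in the L AF,max values of Table 1.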
Figure 5. One-third octave spectra of the road traffic noise scenarios in experiment 1. S0-S6 refer to scenario 0 (reference) to 6 (cf. Table 1).

Note that in addition to these road traffic noise scenarios, the participants were exposed to a constant background sound with an L Aeq of 30 dB(A), which was a combination of filtered pink noise (played back via an additional loudspeaker) and sound from a low-level running office air conditioning system. The additional loudspeaker was located at the wall in front of and above the participant, at the same height as the running low-level office air-conditioning system, so that both sounds were received from roughly the same direction and combined to one background sound source. The background sound helped masking possible low-level sounds from outside the office environment, which was not an isolated listening booth. In addition, a sign was put up during the experiments in the corridor outside the office, asking passers-by to be silent. Thus, sounds from outside the office were minimized. With the played-back background sound being constant and ~15 dB lower than the actual road traffic noise scenarios, both sound sources (sound outside the office and background sound) are negligible as a source of bias for the annoyance ratings. Also, even if the background sound within the mock office would have somewhat affected the participants' perception and/or performance, this is something that would also be present in a real office environment.
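The constant masking background described above combined filtered pink noise with the sound of the office air conditioning; the following sketch indicates one common way to generate such a pink-noise component digitally by spectrally shaping white noise, with the duration, sampling rate and the subsequent filtering and level calibration left as assumptions.

```python
import numpy as np

def pink_noise(n_samples, fs=44100, seed=0):
    """Generate approximately pink (1/f) noise by shaping the spectrum of
    white Gaussian noise; parameters are illustrative assumptions only."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    freqs[0] = freqs[1]                    # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)             # 1/f power spectrum (pink slope)
    pink = np.fft.irfft(spectrum, n=n_samples)
    return pink / np.max(np.abs(pink))     # normalized; level set separately

background = pink_noise(44100 * 30)        # 30 s of background noise
```

In the experiments, this kind of component would still need to be band-limited and calibrated to the reported L Aeq of 30 dB(A) before playback.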
Experimental Procedure

The experiments were conducted in single sessions in English. To ensure sufficient understanding of the experimental tasks, one requirement for study participation was to have good self-reported English language skills. In addition, after task instruction the participants could ask the experimenter in case of ambiguities.

Participants first answered questions about their hearing status, vision, and wellbeing for inclusion and exclusion criteria, which were (i) self-reported normal hearing (not hearing impaired), (ii) self-reported normal or corrected-to-normal vision (but not colour blind), (iii) legal age (18 years or older) and (iv) feeling well (not further specified). Thereafter, they read instructions on the road traffic noise scenarios, the cognitive task and the test program. To familiarize them with the two versions of the Stroop task, the two short road traffic noise scenarios were used: participants worked on trials of the colour version of the Stroop task during the first short scenario and of the shape version during the second one. Then, data collection in the actual listening experiment started.

During each noise scenario, the participant worked on trials of one version of the Stroop task for the first 135 s and then of the other version for the second 135 s. Congruent and incongruent trials were presented in random order. An overall mixing ratio of approximately 50% each was secured by the program increasing the probability of drawing either congruent or incongruent trials after 60% of a noise scenario's duration. Participants were asked to respond to the semantics of the colour word (colour version) or the shape of the geometric form (shape version) as fast and as accurately as possible. Immediately after the participant's response (without any time delay), the next trial started automatically. There was only a break in the Stroop tasks between the noise scenarios, when no sound was played back. The participants did the Stroop task self-paced, which resulted in a different number of trials per participant and noise scenario, depending on how fast they worked on the tasks. The sequence of the two Stroop versions was randomized for each noise scenario, as was the sequence of the noise scenarios.

After each noise scenario, participants answered the following question, which was adapted from the ICBEN noise annoyance question [3,61]: "What number from 0 to 10 represents best how much you were bothered, disturbed, or annoyed by the sound?" The participants gave their rating by means of a slider in the test program on the unipolar numerical ICBEN 11-point scale. As the spacing of the 11-point scale is equal (and thus interval-scaled), it allows treating the data as continuous in statistical analyses, even though by definition the scale is ordinal [3]. This is supported by literature, given that the ordinal variable has five or more categories [62][63][64]. After a break of 30 s, the next noise scenario started. The total experiment lasted approximately 50 min, with the actual unfocussed listening test taking around 35 min.

Participants

The participants were mostly recruited within Empa, via internal online advertisement or direct verbal recruitment. Twenty-four persons (11 females and 13 males), aged between 19 and 63 years (median of 28.5 years), participated in experiment 1.
This number of participants lies well within the range of 16-32 participants proposed in [55] to obtain reliable experimental results. All participants fulfilled the requirements for participation (self-reported normal hearing, self-reported normal or corrected-to-normal vision, not colour blind, legal age and feeling well, see above). Written consent for participation was collected from all participants. Data Analysis Annoyance: In total 168 annoyance ratings were obtained (i.e., 24 participants × 7 road traffic noise scenarios). Performance: Task completion was self-paced, i.e., each participant had an individual pace in completing the tasks. This resulted in different amounts of worked-out trials per noise scenario and participant. On average, 208 trials in the Stroop task were worked-out, ranging from 85-265 trials per participant and traffic noise scenario, meaning that the slowest participant completed 82 trials during one specific noise scenario, and the fastest participant 262 trials during one specific noise scenario. In sum, a total of 34,911 individual responses (trials) were available and processed as follows. Reaction times (RTs; in ms): Each trial not correctly worked-out counted as an error. As usual in analysis of RTs, error trials were removed from the data set, as cognitive mechanisms might have been different from those involved in successful task processing. In a second step, long RTs (exceeding 2 standard deviations of mean overall RTs of the experiment, corresponding to RTs > 1771 ms) were removed, as again other mechanisms might have played a role (e.g., the participant re-reading the instructions on the task or accidentally pressing a response key). In total, 3000 individual responses (trials) (9.1%) were removed. In a last step, the remaining 31,911 individual responses were averaged per participant and road traffic noise scenario separately for congruent and for incongruent trials to obtain mean RTs (data set with a total of 336 entries). Error rate (ER; in %): In a first step, individual colour and shape task versions/variants (cf. Section 2.2) per participant with too high rates of wrong answers (namely, ER > 10%) were removed, as these tasks were likely misunderstood by the participants (e.g., answering the colour instead of the required semantics of the word). In total, 3,410 trials (9.8%) were thus removed. The remaining 31,501 individual trials were again averaged per participant and noise scenario separately for congruent and incongruent trials to obtain the mean ERs (data set with a total of 336 entries). The data was statistically analysed, separately for annoyance on the one hand, and RT and ER as measures of cognitive performance on the other hand. To that aim, linear mixed-effects models were established (see, e.g., [65]). These models allow separating fixed effects (here, the variables RQT and QTD, which were correlated with the other indicators, cf. Section 3.1) and random effects (the participants, modelled with a simple random intercept: one for each participant). Further, the playback number (i.e., the serial position with which the noise scenarios had been played) was included to test for order effects [66]. The statistical analysis was done with IBM SPSS Version 25 using the procedure MIXED. Table 2 shows the correlations (Spearman's rank correlation coefficient r s [60] and Pearson's r, the latter assuming a linear relation) of the annoyance ratings with the continuous indicators for the temporal pattern. 
Both correlation analyses reveal the same insights, although correlation with Spearman's r s is less strong than with Pearson's r. Annoyance increased with increasing N (more events) and CMT (i.e., longer noise breaks, indicating irregular distribution of the events), but decreased with increasing RQT (longer total quiet time), IR (increasingly dominant, here meaning fewer, single events) and L AF,max (louder, here meaning fewer, events). As the acoustical indicators are closely correlated to either RQT or QTD (cf. Table A1), the following account focusses on RQT and QTD.

As Table 2 reveals, the correlations are rather moderate. One reason for this is that the correlation analysis was performed for the individual annoyance data (168 ratings: cf. Section 3.4) without accounting for individual differences between participants' ratings. This shortcoming is overcome by the subsequent hierarchical mixed-effects models, where the participants are modelled with a random intercept.

Table 2. Correlations of the annoyance ratings with the continuous indicators for the macro-temporal pattern (experiment 1).

                  N         log(N)    RQT        IR         CMT       L AF,max
Spearman's r s    0.14 †    0.14 †    -0.14 †    -0.10      0.15 †    -0.15 *
Pearson's r       0.22 **   0.18 **   -0.20 **   -0.23 **   0.15 *    -0.16 **
† p < 0.08, * p < 0.05, ** p < 0.01.

Annoyance

Figure 6 shows the association of annoyance with RQT and QTD. RQT increasing from 0% to 44-81% was associated with decreased annoyance. QTD was linked with annoyance as well, with regular breaks being less annoying than irregular breaks. An interaction between RQT and QTD was not observable (Figure 6c). Besides, annoyance increased with playback number increasing from 1-7 (not shown). This simple order effect was expected and observed in other studies by the same authors (e.g., [21,45]), indicating that the participants got increasingly annoyed by the road traffic noise scenarios over time.

Linear mixed-effects modelling analysis confirmed these observations and significant differences between regular and irregular QTD (cf. Figure 6b,c). Here, two models are reported, which either relate annoyance to RQT (model M RQT) or to QTD (model M QTD). The first model, M RQT, reveals the dependence of annoyance on the continuous variables RQT and playback number (PN). This model takes into account all noise scenarios, S0-S6:

Annoy = µ + β 1 · RQT + β 2 · PN + u k + ε. (5)

In Equation (5), Annoy is the dependent variable annoyance, µ denotes the overall grand mean, β 1 and β 2 are regression coefficients for the continuous variables RQT and PN, respectively, of the seven scenarios (S0-S6), u is the participants' random intercept (k = 1-24), and the error term ε is the random deviation between observed and expected values of Annoy. Table 3 gives the model coefficients. The model M RQT shows that annoyance significantly decreases by 1.4 units on the 11-point scale when RQT increases from 0-81% (cf. Figure 6a), and significantly increases by 1.4 units with a playback number increase from 1-7 (incidentally a very similar increase as for RQT increasing from 0-81%).
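The models were fitted with the SPSS procedure MIXED; purely as an illustration, an equivalent random-intercept model of the form of Equation (5) could be specified in Python with statsmodels as follows, assuming a long-format data table with illustrative column and file names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant x scenario, with columns
# 'annoyance' (0-10), 'rqt' (in %), 'pn' (playback number 1-7) and
# 'participant' (ID). Column and file names are illustrative assumptions.
data = pd.read_csv("experiment1_annoyance.csv")   # hypothetical file

# Random-intercept model corresponding to Equation (5):
# annoyance ~ grand mean + b1*RQT + b2*PN + participant intercept + error.
model = smf.mixedlm("annoyance ~ rqt + pn", data=data,
                    groups=data["participant"])
result = model.fit()
print(result.summary())
```

The fixed-effect coefficients of such a fit correspond to the entries reported in Table 3, while the participant grouping captures the individual differences noted above.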
Figure 6. Noise annoyance as a function of (a) relative quiet time (RQT), (b) quiet time distribution (QTD) and (c) both RQT and QTD as found in experiment 1. Circles represent mean observed values (Obs.) with standard error bars, and lines the corresponding mixed-effects models with 95% confidence intervals, in (b) as horizontal lines with confidence intervals. In (b,c), significant differences between estimated marginal means (p < 0.05; pairwise comparisons with Bonferroni correction) of regular and irregular QTD are indicated by differing letters.

The second model, M QTD, reveals how annoyance is linked to QTD. In this model, only six scenarios, S1-S6, are taken into account, since no level of QTD is applicable for S0 with RQT of 0% (cf. Table 1). In the absence of S0, RQT is not linked to annoyance (p > 0.8; also obvious in Figure 6c). Also, there was no significant interaction between RQT and QTD (p > 0.7; cf. Figure 6c). Model M QTD therefore reduces to

Annoy = µ + τ QTD,i + β 2 · PN + u k + ε. (6)

In Equation (6), τ QTD is the categorical variable QTD (2 levels: i = 1, 2 for regular and irregular) of the six scenarios (S1-S6), and the other variables have the same notation as in Equation (5). Table 4 gives the model coefficients. According to model M QTD, annoyance is significantly higher for longer, irregular than for shorter, regular breaks, but the difference of 0.7 points on the 11-point scale is moderate (cf. Figure 6b). Further, annoyance significantly increases with playback number (as in the above model M RQT).

Cognitive Performance

Performance data was first checked for the Stroop effect with a simple model considering congruency as the sole fixed effect. In fact, the Stroop effect was found for both RTs and ERs: Overall, the effect of congruency was highly significant for RTs (p < 0.001), with incongruent trials (mean RT = 682 ms; standard deviation SD = 148 ms) being answered 31 ms (or 5%) slower than congruent trials (mean RT = 652 ms, SD = 138 ms), as usual in the Stroop paradigm. Furthermore, the Stroop effect was also found for ERs (p < 0.05), with more errors being made in incongruent trials (mean ER = 2.4%, SD = 2.6%) than in congruent trials (mean ER = 2.0%, SD = 2.2%).
Consequently, the effects of the different road traffic noise scenarios on RTs and ERs were analysed separately for congruent and incongruent trials in the following.

RT: Figure 7 shows the association of RT with RQT and QTD, separately for congruent and incongruent trials in the Stroop task. RT was not linked to RQT, except that it tended to be somewhat longer for the longest RQT (81%) than for the other RQTs (0-63%) (Figure 7a). RT, however, was linked to QTD, being somewhat longer for regular than for irregular breaks (Figure 7b). Congruent and incongruent stimuli were affected similarly strongly. Besides, RT decreased with increasing playback number (not shown) as participants got quicker with answering the trials of the Stroop task over time, indicating that they got increasingly practiced.

Linear mixed-effects model analysis again confirmed these observations and significant differences between regular and irregular QTD (cf. Figure 7b). It revealed that RT was not significantly associated with RQT for incongruent (p = 0.29) and congruent trials (p = 0.65) (cf. Figure 7a), but with QTD (p's < 0.05; Figure 7b) and playback number (p's < 0.001) for both incongruent and congruent trials (details not shown). While the effect of QTD was significant, it was quite small (less than 30 ms compared to overall ~650 ms RTs on average, corresponding to a relative change of less than 5%; cf. Figure 7b). RT decreased by some 140 and 130 ms for incongruent and congruent trials, respectively, with playback number increasing from 1-7.

ER: In both incongruent and congruent trials, ER varied neither with RQT nor with QTD nor with playback number (not shown), as also confirmed by mixed-effects model analysis (p's > 0.30 for RQT, p's > 0.26 for QTD, p's > 0.23 for playback number).
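The RT and ER measures reported above rest on cleaning and aggregating the single-trial data per participant, scenario and congruency (cf. Section 3.4); a minimal pandas sketch of this aggregation and of the resulting Stroop contrast is given below, with column and file names as illustrative assumptions.

```python
import pandas as pd

# Single-trial data with columns 'participant', 'scenario', 'congruent'
# (True/False), 'correct' (True/False) and 'rt_ms'; names are assumptions.
trials = pd.read_csv("experiment1_trials.csv")     # hypothetical file

# Remove error trials and overly long RTs (mean + 2 SD), cf. Section 3.4.
rt = trials[trials["correct"]]
cutoff = rt["rt_ms"].mean() + 2 * rt["rt_ms"].std()
rt = rt[rt["rt_ms"] <= cutoff]

# Mean RT per participant, scenario and congruency (basis of the analyses).
mean_rt = rt.groupby(["participant", "scenario", "congruent"])["rt_ms"].mean()

# Overall Stroop effect: incongruent minus congruent mean RT.
stroop_effect = (mean_rt.xs(False, level="congruent")
                 - mean_rt.xs(True, level="congruent")).mean()
print(f"Mean Stroop effect: {stroop_effect:.1f} ms")
```

The aggregated means per participant, scenario and congruency then serve as input to the mixed-effects models described above.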
Experiment 2

In experiment 2, the effects of QTD were explored in more detail. A new sample of volunteers was recruited; no one participated in both experiments.

Audio Processing and Resulting Road Traffic Noise Scenarios

Three road traffic noise scenarios (WAVE PCM format) were again prepared in MATLAB Version 2019a (The MathWorks, Inc., Natick, MA, USA), in the same way and from the same recordings as in experiment 1. Furthermore, participants were also exposed to the same constant background sound at an L Aeq of 30 dB(A) (Section 3.1). Each of the three noise scenarios was 10 min long. For training, the same two 30 s long noise scenarios as in experiment 1 were used.

The three road traffic noise scenarios had the same RQT and L AF,max of the individual car pass-by events, but differed with respect to QTD. Three levels of QTD were used: regular quiet periods, a combination of short quiet periods and six 1-min quiet periods, or two 3-min quiet periods (the latter two distributions referred to as "irregular"). Each noise scenario contained 25 car pass-by events. The scenarios had an L Aeq of 51 dB(A) at the window (measured 50 cm away from and in front of the loudspeaker) and of 41.5 dB(A) at the participant's ear level at the desk.

Figure 8 shows the level-time histories of the scenarios with different QTDs and resulting lengths of the noise breaks, and Figure 9 their corresponding one-third octave spectra, which were all identical because the same individual car pass-by events were used to generate the three scenarios. Table 5 presents the indicators for the resulting macro-temporal pattern of the scenarios. Here, the association of the macro-temporal pattern with annoyance and cognitive performance was mainly investigated with QTD (as CMT was closely related to QTD, cf. Section 3.1), while RQT, N, and L AF,max were the same for S1-S3 and IR varied only little (Table 5).

Figure 9. One-third octave spectra of the road traffic noise scenarios in experiment 2. S1-S3 refer to noise scenario 1-3 (cf. Table 5). Note that the three spectra are identical because the same car pass-by events were used to generate the three scenarios.

Experimental Procedure

The procedure of experiment 2 closely followed that of experiment 1.
Experiment 2 was conducted in single sessions in English. It lasted 45-50 min, with the actual unfocused listening test taking around 32 min.

Participants

The participants were again mostly recruited within Empa, via internal online advertisement or direct verbal recruitment. Twenty-five persons (12 females and 13 males), aged between 26 and 61 years (median of 33.0 years), participated in experiment 2. All participants fulfilled the requirements for participation (self-reported normal hearing, self-reported normal or corrected-to-normal vision, not colour blind, legal age and feeling well; cf. Section 3.2). Written consent was collected from all participants.

Data Analysis

Performance: Since task completion was self-paced, different amounts of worked-out trials resulted per participant and road traffic noise scenario. On average, 452 trials in the Stroop tasks were worked-out, ranging from 301-593 trials per participant and noise scenario. In total, 33,915 individual responses (trials) were available and processed analogously as in experiment 1 (Section 3.4), removing error trials as well as RTs exceeding 2 standard deviations of mean overall RTs, corresponding to RTs > 1724 ms. Thus, 2688 individual trials (8.3%) were removed for RT analysis. For ER analysis, 3153 individual trials (9.3%) of task versions/variants with too high rates of wrong answers (again, ER > 10%) were removed to ensure sufficient task understanding. The remaining 31,227 (RT) and 30,762 individual trials (ER) were then averaged per participant, noise scenario and congruency (congruent/incongruent trials) to obtain the mean RTs (in ms) and ERs (in %) (data set with a total of 150 entries).

As in experiment 1, the data was statistically analysed with linear mixed-effects models, separately for annoyance, RT and ER. As fixed effects, QTD as well as the playback number were used, and as random effects the participants (simple random intercept). The statistical analysis was again performed with IBM SPSS Version 25 using the procedure MIXED.

Annoyance

Figure 10 shows the association of annoyance ratings with QTD. In line with experiment 1 (Figure 6b), annoyance was associated with QTD. The longest (3-min) breaks were somewhat more annoying than shorter breaks (irregular 1-min or even shorter, regular breaks). In contrast to experiment 1, however, the shorter irregular 1-min breaks were associated with very similar mean annoyance ratings as the regular breaks.

In line with these observations, linear mixed-effects model analysis (Table 6), using the approach of Equation (6) (model M QTD, but with τ QTD with 3 levels, i = 1-3, for regular and irregular with 1-min or 3-min breaks), revealed that the overall association of annoyance with QTD was not significant (p = 0.13). In fact, only the annoyance to the 3-min and 1-min irregular breaks tended to differ, by ~0.6 units on the 11-point scale (p = 0.06; Figure 10). Again, playback number was significantly linked to annoyance (p < 0.001).
Table 6. Model coefficients (Coeff.), 95% confidence intervals (CI) and probability values (p) of the linear mixed-effects model M QTD for annoyance in experiment 2. The parameters and symbols are explained in Equation (6) of experiment 1 (but with τ QTD with 3 levels).

Cognitive Performance

As in experiment 1, the performance data was first checked for the Stroop effect with a simple model considering congruency as the sole fixed effect. For both RT and ER a highly significant effect of congruency was given (p < 0.001), due to prolonged RTs and higher ERs during incongruent compared to congruent trials. Overall, incongruent trials (mean RT = 722 ms, SD = 119 ms) were answered 31 ms (or 5%) slower than congruent trials (mean RT = 691 ms, SD = 114 ms), and more errors were made in incongruent (mean ER = 2.0%, SD = 2.2%) than in congruent trials (mean ER = 1.3%, SD = 1.9%). Consequently, the effects on RTs and ERs were analysed separately for congruent and incongruent trials.

RT: Figure 11 shows the association of RTs with QTD, separately for congruent and incongruent trials in the Stroop task. RTs were linked to QTD, being longer for the longer (3-min) irregular breaks than for the shorter (1-min) irregular and the regular breaks. This contrasts with experiment 1, where the RTs were longer for the regular than for the irregular (1-min) breaks (Figure 7). Besides, RTs decreased with increasing playback number (not shown). Congruent and incongruent trials were again affected similarly strongly.

These observations and significant differences between long irregular and short irregular/regular QTD were confirmed by linear mixed-effects model analysis, which showed that RTs were significantly associated with QTD (p < 0.02) and playback number (p < 0.001) (details not shown). While the effect of QTD was significant, it was again small (around 30 ms compared to ~700 ms RTs on average, corresponding to a relative change of ~4%). RTs decreased with playback number increasing from 1-3 by some 100 and 90 ms for incongruent and congruent stimuli, respectively.

ER: In both congruent and incongruent trials, ER was neither associated with QTD nor with playback number, which was also confirmed by mixed-effects model analysis (p's > 0.65 for QTD, p's > 0.05 for playback number).

Discussion

This study performed two unfocussed laboratory listening experiments to study how the macro-temporal pattern of different road traffic noise scenarios with rather low L Aeq of ~45 dB(A) (experiment 1) and ~42 dB(A) (experiment 2), as might be expected in an office environment, affected short-term noise annoyance and cognitive performance in the Stroop task. A range of indicators for the macro-temporal pattern of the scenarios, including relative quiet time (RQT) and quiet time distribution (QTD), were quantified.

Annoyance

The experiments confirmed that quiet periods affect annoyance, revealing that annoyance ratings decreased with increasing RQT, at least up to some 60% (Figure 6). This is in line with literature [8-10,25,27,30,67].
Further, annoyance was linked with QTD. Shorter but more regular breaks were found to be perceived as less annoying than longer but irregular breaks of identical total duration. Similar insights as with RQT and QTD may also be obtained with the other indicators for the macro-temporal pattern (Table 2), which were closely related to either RQT or QTD (Table A1). For example, the number of events (negatively correlated with RQT) positively correlates with annoyance, which was also found for aircraft noise in [20], while IR (positively correlated with RQT) shows a negative correlation with annoyance, confirming the findings of [5]. In interpreting our results on IR, one should keep in mind that, with the exception of the reference scenario S0, all scenarios were highly intermittent (cf. Figures 4 and 8), with IR values of 74% and more.

Our findings suggest that, at the same RQT (with the same number of events), the clustering of car pass-by events after prolonged quiet times (irregular QTD), giving a more distinct temporal pattern, was more annoying to the participants than the shorter but regular events. Thus, to optimize QTD in order to minimize annoyance, providing a smooth traffic flow without too many interruptions, e.g., by reducing traffic lights, might be beneficial. In line with this thought, a laboratory study found that at high traffic densities, road traffic noise at a roundabout was perceived as less unpleasant than at crossroads with traffic lights [68]. RQT, in contrast, can only be optimized (meaning, increasing the breaks) through a reduced traffic volume (e.g., with traffic and parking restrictions and charges in cities), which also positively affects the L Aeq.

The present results on QTD contrast with the conclusions of previous studies that suggest a minimal duration of one [25] or three minutes [27-29] for a quiet period to be valuable with respect to annoyance, and of another laboratory study that did not find the duration of quiet periods to affect annoyance [67]. Thus, while breaks between events (i.e., having certain quiet periods, here: RQT) do seem beneficial, the link of the distribution of noise breaks with annoyance was less clear, and the necessity of a minimal duration of the noise breaks could not be confirmed. However, given the relatively low sound exposure in the experiments with an L Aeq of ~42-45 dB(A), the effects were moderate only, changing annoyance by 1.4 units on the 11-point scale for an RQT increase from 0-81%, and by 0.5-0.7 units for longer irregular compared to shorter quiet times (QTD). Overall, the moderate association of annoyance with relatively low-level road traffic noise (L Aeq of 42-45 dB(A)) is in line with a recent laboratory study that found the link between subjective disturbance and road traffic noise with an L Aeq of 35-41 dB(A) to be quite weak [16].

Cognitive Performance

Compared to annoyance, the association of the macro-temporal pattern with cognitive performance in terms of RT and ER in the Stroop task was less clear. While RQT did not affect performance, QTD was slightly linked to RTs, but the results of experiments 1 and 2 were not clear-cut. In experiment 1, short regular breaks were found to be associated with longer RTs than short irregular breaks (Figure 7), but not in experiment 2. Here, long irregular breaks resulted in prolonged RTs (Figure 11). Yet in both experiments, the association of RTs with QTD, while significant, was weak, with small relative changes in RT of less than 5%.
Further, no association of ER with the macro-temporal pattern of the noise scenarios was found. Similar results were also found in a preliminary listening experiment to this study [44], where road traffic noise neither affected RT nor ER. This unsystematic effect pattern of the different noise scenarios on performance in the Stroop task might be due to their effect on attentional functions being comparatively smaller than their effect on noise annoyance, and because the applied experimental procedure did not allow for a more sensitive analysis of performance data. That is to say, the road traffic noise scenarios used in this experiment may have had too few salient changes (deviants) in terms of transitions from noisy to quiet periods (and back) diverting the attentional focus away from the task at hand to measure an effect on performance in the Stroop task when considering all trials worked out. However, the analysis of performance data could not be limited to those trials of the Stroop task that were performed at the time of, or shortly after, the salient changes in the road traffic noise scenarios. This was because the processing of the Stroop trials was self-paced in the present experiments, so that the relevant individual trials in the cognitive task could not be identified. In contrast, the above-mentioned laboratory study [16] found transitional phases in road traffic noise scenarios to affect reading task performance. Reading speed decreased as the sound level increased (rising front of an event) and increased again during the descending front. Nevertheless, the typical Stroop effect was found in both experiments. That is, RTs were prolonged and ERs were increased for incongruent items, in which two dimensions of the visual stimulus did not match, compared to congruent items. This indicates that the participants seriously worked on the given cognitive task, and that our study in fact comprised unfocused listening experiments to investigate annoyance. Since performance in the Stroop task versions used here hardly changed during the different road traffic noise scenarios and, moreover, did not change systematically between the two experiments, differences in annoyance ratings can be assumed to not be moderated or even caused by performance effects (i.e., one was not annoyed because he/she could not perform well). Instead, the observed annoyance effects can be indeed attributed to the differing macrotemporal pattern of road traffic noise. In that context, it would be interesting to study the effects on noise annoyance in situations where also performance in (possible more difficult) cognitive tasks is affected by the macro-temporal pattern of road traffic noise. Strengths and Limitations A particular asset of the current study is that both, noise annoyance and cognitive performance, were mutually studied in two experiments to evaluate potential effects of road traffic noise comprehensively. While similar studies are available for background speech and music [39][40][41], studies involving road traffic noise to investigate such mutual effects are rare [16,17]. Besides, our design revealed that the associations of annoyance and performance with the acoustic characteristics (RQT or QTD) are quite different. The study also faces certain limitations. As is generally true for laboratory studies, the ecological validity is limited due to the laboratory setting and the rather limited number of participants. 
Strengths and Limitations

A particular asset of the current study is that both noise annoyance and cognitive performance were studied together in two experiments, in order to evaluate potential effects of road traffic noise comprehensively. While similar studies are available for background speech and music [39][40][41], studies involving road traffic noise to investigate such mutual effects are rare [16,17]. Besides, our design revealed that the associations of annoyance and performance with the acoustic characteristics (RQT or QTD) are quite different. The study also faces certain limitations. As is generally true for laboratory studies, the ecological validity is limited due to the laboratory setting and the rather limited number of participants. Further, inferring from short-term noise annoyance in the laboratory to long-term annoyance in the field still needs to be verified [69], and inferring from cognitive performance tasks to long-term performance in office environments is similarly challenging. Some specific limitations also apply. Above all, adapting the design to allow for a more sensitive analysis of performance data, specifically aiming at the transitional phases between quiet and loud periods (see above), would be beneficial. Besides, varying the L Aeq, which is a decisive factor for road traffic noise annoyance (e.g., [45,68]), would add an important dimension to the outcomes. If the L Aeq were sufficiently high to substantially affect cognitive performance, one could also study the effect of reduced performance on (noise) annoyance. These limitations could be addressed in future studies (cf. Section 5.4).

Outlook

Our experiment revealed that, for moderate sound exposure in an office environment, the macro-temporal pattern of road traffic noise affects annoyance. This was true although participants were not actively listening to the noise but were working on a cognitive task, and even though performance on that task was not systematically affected by the noise. Future research might test whether the association of the macro-temporal pattern of the road traffic noise scenarios with annoyance is different if participants actively listen to them (e.g., during relaxation in a mock garden environment). This could be studied in a focussed listening experiment, where only the sound to be subjectively evaluated is presented, without any cognitive task to be performed. Besides, follow-up experiments focusing more on the effects of road traffic noise scenarios on attentional functions might be set up in such a way that the relevant trials in the cognitive task at the time of, or shortly after, the salient changes in the noise scenarios can be identified (i.e., non-self-paced trials or event-based data logging). One could then test more sensitively than in our experiments whether the transitions from traffic noise to quiet periods and back, and/or irregular breaks as unanticipated changes in the auditory background, cause attentional capture. In the experiments presented here, the levels were as one might well find them in an office environment. However, people are also exposed to traffic noise in street cafés, on balconies and in front gardens, where the sound levels can be significantly higher. There, too, people spend longer periods of time and concentrate on certain cognitive tasks, if they have to or wish to. Consequently, further unfocussed listening experiments similar to the experiments presented here would be desirable to study the effect of the macro-temporal pattern on annoyance and cognitive performance under substantially higher sound exposure (e.g., L Aeq = 55-60 dB(A)). Such experiments could help to further fill the gap in knowledge on the links between annoyance, performance and the macro-temporal pattern of environmental sounds.

Conclusions

In unfocussed laboratory listening experiments, the associations of annoyance and cognitive performance with the macro-temporal pattern of relatively low-level road traffic noise situations were investigated in a mock office environment. In line with the literature, annoyance decreased with increasing total duration of quiet periods. The distribution of the quiet times also affected annoyance.
Shorter but more regular breaks were found to be less annoying than longer but irregular breaks of identical total duration; a minimal necessary duration of noise breaks, as proposed in the literature, could thus not be confirmed. Cognitive performance in an attention-based task, in contrast, did not systematically vary with the macro-temporal pattern of the situations. Thus, while the macro-temporal pattern of road traffic noise situations with moderate sound exposure seems to play a minor role for cognitive performance, it may still be important for the annoyance of office staff.

Conflicts of Interest: The authors declare no conflict of interest. Mark Brink works for the funding agency FOEN, but contributed to this study in a purely scientific way and at the request of the other co-authors.

Appendix A

Table A1. Correlation analysis: scatterplots and Spearman's rank correlation coefficient (r_s) [60] of indicators for the macro-temporal pattern of the road traffic noise scenarios S0-S6 of experiment 1. ** p < 0.01.

Figure A1. Results of a pilot experiment to this study (details see [44]): mean reaction time (RT) with standard error bars, shown separately for congruent and incongruent trials, for four variations of the Stroop task: (i) shape test naming the shape of a geometric form with a word written within (shape-shape), (ii) shape test naming the written word within a geometric form instead of its form (shape-word), (iii) colour test naming the print colour instead of the semantics of the word (colour-colour), (iv) colour test naming the semantics instead of the print colour of the word (colour-word).
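The kind of rank-correlation analysis summarized in Table A1 can be reproduced in outline with SciPy. The indicator values below are placeholders chosen only to illustrate the call, not the data behind the table.

```python
from scipy.stats import spearmanr

# Hypothetical indicator values for seven scenarios (S0-S6); placeholders only.
rqt      = [0.05, 0.45, 0.45, 0.63, 0.63, 0.81, 0.81]   # relative quiet time
n_events = [60, 36, 36, 24, 24, 12, 12]                  # number of pass-bys

rho, p_value = spearmanr(rqt, n_events)
print(f"Spearman r_s = {rho:.2f}, p = {p_value:.3f}")    # strongly negative, as expected
```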
Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis

In the IA, Iphigenia agrees to be sacrificed. This voluntary sacrifice must be interpreted as a result of her threefold motivation: personal, love for life; paternal, love for her father Agamemnon, the leader of the Greek army which is about to sail to Troy; and patriotic, love for her country, the great Hellas, whose dignity and freedom Agamemnon and the army intend to defend. These three motives are interconnected and should not be considered separately. This is the principal Euripidean innovation with regard to the mythical and Aeschylean tradition of Iphigenia's sacrifice. It allows us to reconsider the Aristotelian criticism concerning Iphigenia's change of mind, and to restore the unity of her character. There is a literary and mostly Euripidean motif, self-sacrifice; a context, the imminent Trojan war; men and women aiming at the right thing to do according to their status in the right place and at the right time; a young man, Achilles, and a young girl, Iphigenia, who are supposed to be married; a chorus of strangers, women of Chalkis, visiting Aulis and witnessing the events. The last of the extant Euripidean plays provides reversal, emotion, "patriotic" speeches. But above all, it provokes pity and admiration, and raises many questions about the very value of life, death, sacrifice, about the willing or unwilling offer of one's self to the cause of the many. If there is one play where all Euripidean themes are exposed in the clearest manner, this play is undoubtedly the IA. The majority of the interpretations of Iphigenia's sacrifice focus on only one main aspect concerning her motivation and volte-face: the patriotic/Panhellenic one, the personal one (desire to be praised, to control her own destiny, to surpass ordinary female standards, etc.) or the "paternal" one. Our aim in the present article is to re-examine Iphigenia's sacrifice, in order to point out its threefold character and to study Euripides' reflection within the framework of a global quest for new standards of nobility. As a daughter and a young girl, it is quite natural for Iphigenia to be influenced by her parents' opinion; but it appears less natural that a girl prefers the paternal to the maternal motivation and arguments. As a maiden and a princess, Iphigenia aspires to the preservation of her high social status, and to the praise offered by her household and relations; these aspirations could be fulfilled through her marriage to Achilles; yet, like the other Euripidean maidens, she realizes that this traditional solution would not guarantee any happiness or glory. Iphigenia claims a better life, and therefore rejects the traditional female destiny; she accepts death because she cannot bear the thought of a mediocre life. This is the "personal" aspect of her sacrifice. The third aspect, the more obvious one, is the patriotic or Panhellenic. As a Greek, Iphigenia really wants the Greek army to sail to Troy and win the war; but, as a woman, her only contribution to this war is to repeat her father's patriotic arguments, and to become a mouthpiece of his cause, which she completely embraces. Through her choice, she symbolically accompanies her father during his Trojan expedition. Her patriotic sacrifice is the only way to be a part of her father's plan.
Our purpose is to examine whether or not Iphigenia, the last victim, reminds us of all the previous ones, embodying the traits of every other Euripidean victim in a unique character.

Iphigenia's past

Before Iphigenia appears on stage, what do we know about her? Almost everything, in other words almost nothing. No one is supposed to ignore her legendary past, which is a part of the epic tradition (Cypria, Iliad). One can find a synopsis of the traditional version of the Cypria in Proclus, Chrest. 1, 135-43; López Férez 2014: 164-175 provides a comprehensive survey of Iphigenia's mythical past and its influence on Euripides, with an updated bibliography. There are at least two previous Iphigenia-plays, by Aeschylus and Sophocles. The latter seems to give a place to Ulysses, who is not on stage in the IA, in order to focus on Iphigenia's marriage and to present some thematic similarities with Philoctetes 3. She is already known as the heroine of Euripides' IT, the priestess of Artemis in charge of the consecration of the Greek victims to be sacrificed according to the laws of the "barbaric" Tauris (IT 30-41). The traditional version of her myth includes her sacrifice to Artemis and substitution by a hind thanks to the goddess' intervention (e.g. in IT 25-30), although the Aeschylean version does not mention any substitution. In Aeschylus' Agamemnon, Iphigenia's sacrifice is referred to as a necessity (Ag. 218); nevertheless, Agamemnon's words (δαΐξω "slay", μιαίνων παρθενοσφάγοισι ῥείθροις "stain… with streams of virgin blood", κακῶν "evil", Ag. 208-11) sound clearly like evidence of a criminal action: Agamemnon recognizes it, but on no account is he refusing this sacrifice. There is "no alternative" for him 4. Aeschylus' audience witnesses this sacrifice indirectly (through its description in the Parodos of the play): the focus is on horror and violence (228-47), instead of nobility and abnegation. The most important thing about this sacrifice is Artemis' wrath because of Agamemnon's fault (one version) or Artemis' demand because of Agamemnon's imprudent promise: IT 20-1, ὅ,τι γὰρ ἐνιαυτὸς τέκοι κάλλιστον ηὔξω φωσφόρῳ θύσειν θεᾷ, "you vowed to the light-bearing goddess that you would sacrifice the fairest thing the year brought forth" (a different version). What is new in the IA is the absence of reference either to the goddess' wrath, or to the king's guilt: what we do know is that Agamemnon has to sacrifice his daughter to Artemis according to Calchas' prophecy. If one ignores Iphigenia's legendary past, one also ignores why she must die. This Euripidean innovation minimizes the divine aspect of the sacrifice, in order to emphasize the human one. The Prologue of IA is a simple announcement of the prophecy, without any reference to explicit orders. The "traditional" elements are hardly recognizable. It is as if the Aeschylean necessity or violence had disappeared, as if the order to sacrifice Iphigenia were a mere invention of Calchas, not a divine punishment: "Calchas the prophet foretold that we must sacrifice Iphigenia, my daughter, to Artemis who dwells in this region: if we sacrificed her we would be able to sail and overthrow the Phrygians, but otherwise not".
The verb σπείρω, Agamemnon's reference to family ties, is a usual term for descendants; but in this very context it would mean more than the simple fact that Agamemnon is Iphigenia's father: it can be interpreted as an expression of the very special, physical and indestructible bond between father and daughter. Iphigenia is Agamemnon's "seed". If he sacrifices her, there will probably be no other "seed" like her, even though he has two more children, including a boy, Orestes. One is also supposed to know Iphigenia's story among the Taurians. She relates her near sacrifice in the Prologue of the IT, lamenting her "ill-starred fate" (δυσδαίμων δαίμων, 203-4), longing for revenge on Menelaus and Helen (354-8), and recalling Agamemnon's treachery (371), namely her marriage to Achilles, without blaming Artemis, because, according to her, "no god is wicked" (391). Almost all of those different versions of Iphigenia's legend are well known. Therefore, in the IA, an innovation is expected, one which is necessary in order to stimulate the audience's interest. This innovation must be as powerful as the impact of the legend. Not a disruptive one, because the audience must recognize the legendary Iphigenia; yet Euripides must (re)create his Iphigenia, giving her a "second life" on stage.

The "personal" reasons to die

Before Iphigenia appears on stage, we know about her father's change of mind (the second letter to Clytemnestra, 107-9) and we see Menelaus expecting Iphigenia's arrival "from Argos to the army" (328). When he discovers that Iphigenia will probably never come to Aulis, he is furious and blames his brother's "unsteady mind" (334) and "devious thoughts" (332). The heroine's arrival is also expected by the audience, so there is no actual suspense: Iphigenia is on the point of arriving at Aulis. Euripides now introduces his first innovation: a praise of the princess by the Messenger, where almost every word is deliberately ambiguous. This announcement focuses on the admiring gaze of the onlookers on Iphigenia (εἰς θέαν, ἴδωσι, περίβλεπτοι). She must be an elite victim, because she is presented as an impressive young girl. What is at stake here is her εὐδαιμονία, the privilege of "fortune", i.e. of noble birth and happiness. Iphigenia has a role to assume in order to justify her κλέος. Perhaps she is prepared to be married, or to be "consecrated" to Artemis. The use of the ritual term is ironic: the same ritual is performed either before a marriage or a sacrifice; Artemis as a virgin goddess protects young girls; Artemis as a huntress demands noble victims. The passage stresses the double perspective of being the best bride and the best victim at the same time.
Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis Another interesting point in these lines highlights the relationship between Agamemnon and Iphigenia: it is the first time we hear about his "longing" (πόθος) to see his daughter before his departure to the war.Echoing this desire of her father, Iphigenia will wish to accompany him to his journey to Troy: Εἴθ᾿ ἦν καλόν σοι κἄμ᾿ ἄγειν σύμπλουν ὁμοῦ (666) "How I wish it were proper for you to take me with you as a shipmate!"Iphigenia's desire to be always at the same place as her father and at the same time is emphasized by means of καί (σοι κἄμ᾿), of the prefix συν-(σύμπλουν) and of ὁμοῦ, the three terms in the same line: as if the noble marriage, a delightful prospect for every young girl of her social condition, were of less importance than her strong ties to her father.The "personal" and "paternal" aspects are interwoven. Another aspect of the desire is the erotic one.Are we to argue with Michelini that Eros is an instrumental motive for Iphigenia's self-sacrifice? 15When Michelini examines the relation between Iphigenia and her putative fiancé, Achilles, she refers to the maiden's "erotic longing for glory" in order to be "the bride of this archetypal hero" (i.e. the Achilles of the Iliad).Yet we have no textual evidence of "eroticization" of the Achilles-Iphigenia relation, only the ordinary reserve of a well-educated young girl towards her future husband (1340 "open the door, slaves, so that I may hide myself indoors"), and the shame when her father's lie about the marriage is discovered (1341, αἰσχύνομαι; 1342 αἰδῶ φέρει).Indeed the Achilles of the IA shares few common characteristics with the Homeric one.Rather than the "archetypal hero", in the IA we see above all a boaster, who pretends to be a free man and to use his spear "so far as it in [him] lies" (929-30), whose "name" will never allow Agamemnon to kill Iphigenia (947), who swears in front of Clytemnestra that "king Agamemnon shall not touch your daughter, no, not lay his fingertip on her robes" (950-1, emphasis on where the bride, Iphigenia, makes a free choice of her future husband (Achilles or Hades).The only traditional aspect here is the final choice, the sacrifice: it corresponds exactly to what her κύριος has planned for her.Although Rehm 1994 does not discuss specifically the IA, pointing to Foley's work on this play, it is worth noticing that the 5 th -cent.practices described (11-29) fit Iphigenia's "sacrificial" marriage perfectly. 15Michelini 2000: 51-53. the two negative particles οὐχ/οὐδ in the beginning of each line) and who does not exclude the use of violence for Iphigenia's sake: Τάχ᾿ εἴσεται σίδηρος, ὃς πρὶν ἐς Φρύγας ἐλθεῖν φόνου κιλῆσιν βαρβάρου χρανῶ, εἴ τίς με τὴν σὴν θυγατέρ᾿ ἐξαιρήσεται (970-3) "This sword will bear me witness: even before I get to Phrygia I shall stain it with barbarian blood if someone robs me of your daughter16 ". If Agamemnon does not yield to the supplication of his wife, Achilles wishes that Clytemnestra and therefore Iphigenia could have recourse to him (1015-6); if Clytemnestra fails, he will be here protecting her daughter (1028).Achilles is not interested in Iphigenia (he presents himself as a highly prized bridegroom, 958-9), but in his own glory. 
After being confronted with the army's violence, despite the fact that he tries again to preserve his heroic character or to stimulate Clytemnestra's admiration because he has risked being "stoned to death" (1350) 17 , Achilles finally abandons Iphigenia's defence: his only proposal is that Clytemnestra must "hold fast to" her daughter (1367), not let Ulysses "drag her away" (1365), even though the final line of this stichomythia, ἀλλὰ μὴν εἰς τοῦτό γ᾿ ἥξει (1368 "it will come to that") sounds like a confession of his powerlessness. Achilles is a "reasonable" character admitting his defeat.Euripides subverted the Homeric archetype: the real hero is not Achilles, but Iphigenia, and she is not in love with him, she imposes her choice on him and thus she becomes the leader of the action.In the Iphigenia-Achilles couple, the traditional male/female roles are reversed; but this is no evidence of "eroticization" of their relation.Furthermore, Iphigenia is addressing him directly (1418, σύ δ᾿, ὦ ξένε 18 ), giving him orders (μὴ θνῆσκε… μήδ᾿ Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis ἀποκτείνῃς…/ἔα, 1419-20; "do not die… or kill…/but allow"), an unusual manner to express "erotic" desire or submission to a future husband.Neither Achilles who does not care for Iphigenia, nor Iphigenia, who only cares for her father, have anything to do with any aspect of erotic desire.The real couple in this play is Iphigenia and Agamemnon, not Iphigenia and Achilles.Marriage with Achilles (or marriage in general) can hardly be counted among Iphigenia's personal reasons to refuse or to accept sacrifice. It is interesting to compare Iphigenia's first arguments against the sacrifice (1211-52) with the ones she exposes l. 1368-1401, after her father's monologue (1255-75) justifying his decision to proceed to this sacrifice. Like her mother, for whom the sacrifice is a personal and a family affair (cf.1141; Clytemnestra regards it as a prejudice against her, instead of Iphigenia), Iphigenia alludes to family life, to her role as a daughter, and especially as the eldest and the most loving child (emphasis is put on πρώτη at the beginning of 1220 and 1221), to her marriage and to the joy of receiving her old father into her home, in order to "repay for the toil of [her] nurture" (1230) 19 .There is no allusion to public life, either to the army or to the expedition; she only alludes once to an "external" affair (i.e.neither private, nor familial), when she refuses any relation between her and the "marriage of Alexander and Helena" (1235-6).The sacrifice is limited to the nuclear family, the mother, the father and their children.There is no place for broader considerations. 
On the contrary, when she decides to die, "personal", "public" and "familial" motives are interwoven. She does not want to be isolated, "a single life" (1390), and regards herself as a part of the broader family of all the Greeks (1385). Nevertheless, she aims at the preservation of her (nuclear) family, because she urges her mother "not to hate" Agamemnon, referred to firstly as a father, secondly as a husband (1454). In French, […] the word means something between 'stranger' and 'friend'. In ancient Greece, "friendship" is a result of hospitality toward strangers, an "exchange" of hospitality. But this kind of friendship concerns only men, not women (or a man and a woman), and neither Achilles nor Iphigenia have ever been hosts. In translating "my friend", Stahl, in our opinion, expresses the state of the young girl's mind, rather than a real tie between Achilles and her. According to Stockert 1992: 544, this is an echo of Agamemnon's πολλὰ μοχθήσας πατήρ, "the father who has worked so hard" (690), who must now give his daughter to her future husband; in both passages there is a reference to the girl's education by her father (not her mother): a supplementary reason for Iphigenia to be indebted to Agamemnon, not Clytemnestra. After Clytemnestra's violent charge against Agamemnon (1146-1208), Iphigenia's acceptance to be sacrificed could be interpreted as an attempt to reconcile her parents and thus to preserve her family. Concerning her complete submission to Artemis' will (1395-6), it could either be evidence of Iphigenia's piety and resignation (a mortal is not allowed to oppose a deity), or, in a quite ironical sense, highlight the oddity of a demand whose justification remains unknown. Like her mother later on in the play, Iphigenia appeals to morality and sentiments. She opposes her own good memory to her father's forgetfulness (1231-2), she tries to arouse his compassion for her mother's double travail (1234-5), and to revive his paternal feelings: a glance, a tender kiss (1238) is all that would remain after death. It is worth noticing that these considerations no longer appear in her acceptance discourse. Her "moral" concerns become less "egocentric", more general: she appeals to a "just plea" (1391), implying that her decision to die for the many is the only way to achieve justice. The final three lines of her supplication, 1250-2, are regarded as dubious; Kovacs brackets them. They have given rise to scholarly discussion about the question of whether Iphigenia is "mad" or not (μαίνεται) 20. These lines are also supposed to be a statement against traditional heroism and the belle mort, because of the opposition between κακῶς ζῆν and καλῶς θανεῖν. According to Jouan, these lines only intensify the dramatic tension and give more value to Iphigenia's change of mind 21.
Our purpose here is not to discuss bracketing or not: because of their dramatic value, these lines could probably have been an actor's interpolation in order to increase empathy between the public and the maiden.Yet what is important, especially in 1252, is the emphasis on death, θανεῖν, at the beginning and the end of the last line of Iphigenia's monologue: it reminds us of Iphigenia's fate, introducing the question of how Euripides will manage to transform the violent Aeschylean version of the sacrifice 20 Siegel 1980: 321, based on these lines, supposes that Iphigenia is "driven mad" when she chooses sacrifice instead of life.According to Funke 1964: 299, Iphigenia is "unconscious" of what a real choice means, so Aristoteles' criticism of her ἦθος ἀνώμαλον (Po.1454a, 26-33) is justified.See also Gödde 2011: 265-268 on Iphigenia's "madness" and on the "psychological" aspects of characters. 21Jouan 1983: 147. Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis into a voluntary offer 22 ; how he will present this radical transformation of a young girl who loves life into a willing sacrificial victim who gives her life.Furthermore, the opposition between (bad) life and (good) death should be interpreted with regard to the particular context of this play: what does it mean to live or die, for a φιλοπάτωρ (639) girl like Iphigenia?Even when she is referring to her imaginary future, she is unable to view it as independent of her father's one.She seems anxious about the mysterious "sailing" she must undertake alone, separated from her father and mother (667-70), asks no further questions when her father forbids her to do so ( 671), but orders him to "hurry back" from war for her own sake (672) 23 .Life only makes sense if he is at her side. The "paternal" reasons to die. In this part we will argue that Iphigenia's "paternal" reasons are the same, no matter whether she refuses or accepts sacrifice.In both cases, she is the φιλοπάτωρ daughter who longs for her father's love, the one who takes her father's place in the play when Agamemnon is entangled in his own lies and tries to justify his horrible decision.Iphigenia becomes the real leader, symbolically of the army, actually of the tragic action 24 . Let us first examine Agamemnon's motives for engaging the Greek army into this war (and therefore for sacrificing Iphigenia) from three different viewpoints: his own, Menelaus' and Clytemnestra's. According to Agamemnon, in the Prologue (61-5), the reason of this "Panhellenic" war is the pact concluded between the suitors of Helen to 22 Aeschylus, Ag. 205-247.On violence and the opposition between Aeshylus' and Euripides' sacrificial narrative : Loraux 1985: 75-77; Crespo Alcalá 2002: 94-101 and 104-105; Durán López 2003: 76.Foley 1982: 176 does not believe that "the violent Aeschylean scenario" can be "fully transformed by individual gestures of pity and self-sacrifice".Yet violence remains in Euripides, death is omnipresent, but "pity and self-sacrifice" give a new sense to this violence, which becomes a human affair, a matter of choice. 23 Σπεῦδ᾿ ἐκ Φρυγῶν μοι, θέμενος εὖ τἀκεῖ, πάτερ.The 1st person particle μοι is omitted in the English translation (the French one "reviens-moi" is more precise), despite its importance: the return of Agamemnon is referred to as if it were a favor or a gift reserved for Iphigenia alone. 
24Felson 2001: 33-34 and n.18 comments on φιλοπάτωρ in order to emphasize the special bond between Iphigenia and her father, which "excludes Clytemnestra" and establishes an exclusive relation between the father and the daughter, leaving no place for the mother.This would be a prelude to Iphigenia's "patriotic" speech (1375-1401). "make a military expedition" in order to help Helen's future husband, in case of rape, to get her back "by force of the arms".Agamemnon, Menelaus and the other Greek leaders have to keep their oath, whatever the cost in human lives. According to Menelaus, Agamemnon must remember his personal involvement in order to obtain the leadership of the Greek army, a real electoral campaign (337-42), and his longing for power and glory (ἀρχή, κλέος 357).Agamemnon was then "willing" (360) to sacrifice his daughter to his personal interests.After the arrival of Iphigenia at Aulis, Menelaus takes pity on his "desperate" brother (472), changes his mind (478) and advises him to "disband the expedition" (495) and save his daughter's life.But now Agamemnon presents a new motive for continuing the military process and therefore killing Iphigenia: the "necessity" according to him, (511), i.e. the "fear of the army" according to Menelaus (517) 25 .Neither Menelaus nor Agamemnon insists here on a "patriotic" core motive.The oath of Helen's suitors no more specifies the nationality of the enemy: the expedition should be made "whether it was a Greek or a barbarian" (65).Agamemnon exposes his patriotic arguments later on in the play (1255-75), when he has to explain why he abides by his decision despite Iphigenia's supplication and Clytemnestra's threats 26 . According to Clytemnestra, Agamemnon's motive is to help Menelaus to get Helen back (1168), a "tribute" to a "bad woman" (1169).Clytemnestra is opposing moral categories, "bad" and "good" (καλόν/κακῆς γυναικός 1168-9), and feelings, "love" and "hatred" (ἔχθιστα/φίλτατα 1170).She aims to prove that Agamemnon's decision will entail moral condemnation: how could he prefer the military leadership and expedition (1194-5) to the life of his own child?After Clytemnestra's interpretations of Agamemnon's motives to go to war, his arguments, despite his effort to be a responsible commander of the army and a faithful suitor of necessity, seem rather weak and unconvincing.What is at stake here is neither his personal glory, nor his piety (there is hardly any reference to religious motives here or elsewhere in the play), but his Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis responsibility to lead a campaign against the "barbarians"27 .At the beginning of his monologue (1255-75), we notice that the patriotic argument is absent. Agamemnon is facing insurmountable odds: he must choose between two equally "terrible" things (1257-8, δεινῶς δ᾽ἔχει…/δεινῶς δὲ καὶ μή), and he is impressed by the army's irrational power (1263) which can be used against his own children if he does not respect his promise to sail to Troy (1267-8). 
The patriotic motive appears at 1271, too late to be regarded as Agamemnon's principal concern.Late though this motive arrives, the leader of the Greek army puts forth here for the first time the question of Greece as a whole (not only as the country of Helen and Menelaus), of the Hellenic superiority over the barbarians, of the Hellenic pride and the Hellenic freedom: "It is not Menelaus who has enslaved me, my daughter, […] it is Hellas.To her, I must sacrifice you, whether I will or no: she is my ruler.As far as it depends on you, my daughter, and on me, she must be free, and we Greeks must not have our wives forcibly abducted28 ". Agamemnon exposes here for the first time his patriotic duty, as well as his difficulty to adhere to this cause: for him, the army is powerful but "foolish" and incontrollable; even though he is its supreme leader, he seems to have no influence or authority on it, to be actually "enslaved" to his fear of the soldiers under his command.But if Agamemnon is unable to endorse the commander's charge, it is necessary to find another leader, real or symbolic, replacing Agamemnon and fulfilling his mission. Agamemnon leaves the stage after the end of this speech.He will never appear again.The way is now open for his "substitute" who shall be able to show a stronger will and a clearer commitment.Iphigenia's "paternal" motive sheds new light on what her father is (or is not) able to do: it is clear that his weakness will not lead the army to victory; it is also clear that no other leader (Menelaus or Achilles) is able to do so.Iphigenia will endorse the role of the leader because her father was previously meant to have it.As it is impossible for her to become a male leader, she has to die.Her "change of mind" (acceptance of the sacrifice) is in fact a "change of state": from maiden and daughter to army leader. The "patriotic" reasons to die Agamemnon's mourning, his changing decision (or "inconsistency") can be interpreted as a precursor of Iphigenia's reversal.The maiden turns from joy (due to her imminent marriage) to annihilation (due to the announcement of her imminent sacrifice), but later on she changes her mind and transforms a constraint into a free choice.The question of "consistency" or "inconsistency" becomes the central point of the dramatic action: a weak and rather inconsistent Agamemnon and a strong-minded and consistent Iphigenia, his exact opposite. Agamemnon's patriotic arguments prepare this so-called Iphigenia's reversal: her "patriotic" motivations join her "personal" and "paternal" ones, because even her father, who previously refused to sacrifice her considering that the sacrifice would be an undeserved gift to Menelaus and his "wicked wife" (396-9), now explains that it is his duty to protect his country and family and to prove the Hellenic superiority over the barbarians.Agamemnon's change of mind is a piece of the dramatic economy, because it prepares the public to see a new and different Iphigenia on stage.Although we cannot be sure that Hellas will no longer be under the barbarian threat, this sacrifice will guarantee the annihilation of the threat, at least temporarily. 
Iphigenia presents her new decision as a result of an intellectual process and invites her mother (and Achilles, who is watching the scene) to listen to her: Οἷα δ᾿ εἰσῆλθεν μ᾿, ἄκουσον, μῆτερ, ἐννοουμένην· κατθανεῖν μοι δέδοκται· τοῦτο δ᾽αὐτὸ βούλομαι εὐκλεῶς πρᾶξαι, παρεῖσα γ᾿ ἐκποδὼν τὸ δυσγενές (1374-6) Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis "Hear, mother, the thoughts that have come to me as I pondered.I have decided to die: my only wish is to act nobly, clearing myself from all taint of baseness". Iphigenia no longer appeals to sentiments, or personal and familial ties.Her new reference is to her "inner reason" (ἐν-+νοῦς), an unusual feminine reference, but a usual one for a Euripidean woman.She then invites her mother to "consider" with her (1377) the validity of the forthcoming arguments in favour of a "patriotic" sacrifice.She thus imposes a new method of decision-making, based on mature reflection, not on divine orders or "necessity".Yet it is not considered proper for young girls to make decisions: in the family context, it is the father who decides; in the public context, city or army, it is the politicians or the military commanders.Agamemnon is absent, Achilles seems rather fatalist (Ulysses is too powerful, he has too many soldiers with him and would probably take Iphigenia away, 1360-9): because there is no commander equal to the task, Iphigenia, the commander's daughter, undertakes the defence of Hellas. Let us consider the historical context of the play29 .The IA is the last Euripidean play, probably remaining unfinished at the time of Euripides' death in the winter of 407-406 B.C., written in Macedonia, where the author migrated in 408 B.C.The play was performed posthumously in 405 B.C., only one year after the end of the Peloponnesian war (404 B.C.).At this very moment, a play with a violent political context in which a sacrifice, a violent act, is a means of salvation of the many, of the "Panhellenes", could offer a reason for hope, meagre though this hope may be.The adherence of the young girl to her father's patriotic cause would be a reason to believe that the "politics of love"30 are still valuable, the solidarity ties have not been completely destroyed. There is no evidence that Iphigenia's arguments are "empty words" 31 , that Euripides "hides" himself behind Iphigenia's character or that the play only reflects the author's political convictions 32 .Neither can we agree with the Aristotelian critic (Po.1454a, 30) that this Iphigenia (the patriot) "has no common points with the previous one" (the loving young girl unwilling to die) 33 .Euripides created Iphigenia as a character and wrote his play in the particular historical context of the end of the Peloponnesian war.But the IA is a tragedy, not a political discourse; Iphigenia is a fictional character, not an Athenian orator. 
Iphigenia's perception of the "Panhellenic" ideal is an issue discussed by some scholars 34 .Iphigenia is a princess confined to her parent's palace and waiting to be married; she is not supposed to be aware of the political or military context.Yet Iphigenia, in the play, is the most φιλοπάτωρ of all Clytemnestra's and Agamemnon's children, the one who wants to be always by the side of her father, and the one who has listened to his patriotic reasons to sacrifice her.Iphigenia's "Panhellenic" vision is influenced by her father's discourse: it is fairly normal, for a daughter like her.It is also an opportunity for Euripides to underline the difference between her and 32 Said 1984: 36 regards Iphigenia's patriotic speech as "slogans", a "heritage" of the Medic wars reused in order to transform an imperialistic war into a war for freedom.Funke 1964: 292, 295, 299 thinks that Iphigenia, being an inconsistent character, only repeats her father's arguments.Luschnig 1988: 108: "Euripides […] purposely used anomaly of construction and character as a dramaturgical device"; cf.91-110 for a comparison with plays "accused of inconsistency" (Medea, Hecabe, Heracles, Heraclidae). 34 According to Bonnechere 2009: 210, the Panhellenic cause is "meaningless".The value of Iphigenia's perception of "Greece" as an ideal worth sacrificing herself for is under question.Siegel 1980: 315 opposes her "self-delusion" (Iphigenia thinks that "Greece" is a noble cause) and the reality of "ignoble causes and forces".For O'Connor Visser 1987: 123, the Panhellenic ideal is a result of Iphigenia's and Achilles' first meeting, when she "realizes that the whole Hellas is watching her and that so much depends on her".Foley 1985: 78 establishes a parallel between marriage and sacrifice, both Panhellenic rituals.Michelini 1999-2000: 55-56 quotes Isocrates Hel.67, Paneg.181, and Aeschines 3, 122, insisting on the violence of the Panhellenic ideas.On the concept of "Panhellenism", the term Πανέλληνες, tragedy as a "Panhellenic" genre, and Panhellenism in the IA : Rosenbloom 2011: 353-361 and 372-379.The comparison between the Panhellenic ideal of Iphigenia in the IA and the "longing for Hellas" and "restoration of the Hellenic identity" of Iphigenia in the IT shows "the fruitfulness of Panhellenic themes as a source of emotional engagement".Cf. also Michelakis 2006: 76-78.Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis her mother, who is not interested in "politics" at all35 .But the daughter's arguments are more solid than her father's. The reference to freedom (1383) echoes Agamemnon's "she [i.e.Hellas] must be free" (1272-3).The difference is that Agamemnon is "enslaved" (1269) to the freedom of Greece, an oxymoron (how can one be a slave of freedom?), while Iphigenia embodies the expectations of all Greece (1378): her free choice guarantees Greek freedom. The reference to the salvation of Greek women (1380-3) and of Greece in general (1420) echoes Agamemnon's "we Greeks must not have our wives forcibly abducted by the barbarians" (1274-5).The difference is that Agamemnon only cares for women's abduction, while Iphigenia is presenting herself as the saviour of all Greece, including women.The salvation verb ῥύσομαι (1383) must be interpreted as a freedom term connected with ἠλευθέρωσα in order to intensify Iphigenia's patriotism37 . 
Iphigenia's war is not so different from Agamemnon's: it is a war of conquest. The difference is that Agamemnon's manly duty, as of all Greek men, is to sail to Troy, to make real war and probably to die in it. Iphigenia cannot participate in this war or accompany her father to Troy. By accepting the sacrifice, she becomes a part of the κοινόν of Greece, and of the Greek army. An unexpected glory for a woman: μυρίοι μὲν ἄνδρες ἀσπίσιν πεφαργμένοι, μυρίοι δ᾿ ἔρετμ᾿ ἔχοντες […] δρᾶν τι τολμήσουσιν ἐχθροὺς χὑπὲρ Ἑλλάδος θανεῖν, ἡ δὲ ψυχὴ μι᾿ οὖσα πάντα κωλύσει τάδε; (1387-90) "Countless hoplites and countless rowers will dare […] to fight bravely against the enemy and die on behalf of Hellas: shall my single life stand in the way of all this?" This comparison between a young sacrificial victim and the soldiers who defend their homeland has appeared previously in Euripides' Phoenician women, where Menoeceus offers his life to save Thebes (Ph. 997-1014), comparing his sacrifice with that of the Theban soldiers. Like Iphigenia, Menoeceus has patriotic motives. Like Iphigenia, he aims to participate in the war, but like her he is too young to be a soldier. His sacrifice changes him into a combatant. Her sacrifice changes her into a Greek soldier. But there are differences between those patriotic sacrifices: firstly, Menoeceus, like every man, is destined to be a soldier at any rate, while Iphigenia becomes a "soldier" by means of her sacrifice; secondly, Menoeceus disobeys his father Creon, the king of Thebes, who refuses to sacrifice his son to his homeland, while Iphigenia obeys her father Agamemnon and dies for their common homeland, for their common war. Stockert 1992: 34 examines the relation between youth and (self-)sacrifice as a mark of μεγαλοψυχία (high-mindedness). He refers to Aristotle (Rh. 1389a): young people are "ambitious for honor", "ambitious for victory", "good-minded", "confiding", "of good hope", and μεγαλόψυχοι "high-minded"; "in their actions, they prefer the good to the useful". But they are also impulsive, acting under the influence of passion rather than reason: Iphigenia's sacrifice, according to this interpretation, is the result of an impulse of her heart, as well as of rational reflection and support. In our opinion, the "impulse of her heart" corresponds to her love for her father. The patriotic and the freedom themes are also connected with Greek superiority: Βαρβάρων δ᾽Ἕλληνας ἄρχειν εἰκός, ἀλλ᾿ οὐ βαρβάρους μῆτερ, Ἑλλήνων· τὸ μὲν γὰρ δοῦλον, οἱ δ᾿ ἐλεύθεροι (1400-1) "Greeks, mother, must rule over barbarians, not barbarians over Greeks: the one sort are slaves, but the other are free men".
Iphigenia praises Greek superiority but she does not intend to humiliate the barbarians.She broadens her father's argument concerning the "abduction of Greek wives" (1265, 1275) and refers to an ordinary Athenian reality: most barbarians are slaves, so it would be normal for a Greek princess to present them as such 39 .This presentation of the barbarians is undoubtedly a cliché; nevertheless, there is no reason to minimize its importance in the whole patriotic framework, which is common to Iphigenia and her father.If this argument were only an "ironic" one 40 , there would be no place for any term related to "intellectual" activity, like the ones Iphigenia uses in the beginning of her speech: Iphigenia really means what she says, even though she might be influenced, like any Greek, by the idea of Greek superiority, a usual pattern at the time of the play. The culmination of Iphigenia's patriotic arguments is "you bore me for all the Greeks in common, not for yourself alone ( 1386) 41 .This statement echoes Agamemnon regarding himself as the "ruler" of Greece (1271-2).Iphigenia believes that she is a part of a whole while her mother is only a (selfish) member of her own family.The opposition "all the Greeks"/"alone", first/last word of the line, emphasizes the distance between the mother and the daughter. The ll. 1393-4, "better to save the life of a single man than ten thousand women", sound rather odd in a speech where a woman offers her life in order to restore the dignity of her country and to guarantee Hellenic superiority over the barbarians.Iphigenia explains why Achilles should not risk a violent confrontation with the Greek army for her sake."A single man" more important than "a thousand women" is a cliché, but it fits in with the war context of Iphigenia's speech and of the entire play.It could also be an echo of 1169 ("a bad woman", i.e.Helen): Iphigenia refuses to be a woman for whose sake so many men jeopardize their life.Or is it a simple reference to this very man, Achilles, her putative fiancé, whose life she aims to preserve?In this case, there is an additional reason to admire her altruism.Hall 1989: 196-197. 40 Said 1984: 36. 41 Sébillotte Cuchet 2006: 286-287 examines the "duty of the Greek mothers" to give their children to Greece.The best example is Praxithea in Euripides' Erechtheus. 42One can find an interesting interpretation of these lines in Stockert 1992: 589. Three Iphigenias in one or The unity of the character Let us re-examine the threefold motivation of Iphigenia, in order to answer the Aristotelian critic that the "suppliant Iphigenia has nothing to do with her later character". Iphigenia is a young girl who loves life, but, above all, she loves her father.She first wants him to "stay at home with his children" (656), then wishes to accompany him (666) and finally, when he exposes the patriotic/ Panhellenic reasons for the Trojan expedition and her sacrifice, she accepts it.Is her life worth living without her father? Iphigenia is a princess who will be married to "such a man", Achilles (711), her father will accomplish almost all the wedding rites, and this is the reason why he asks Clytemnestra to return to Argos (719-36).But for Iphigenia her future life as a married woman is hardly conceivable without her father (1228-30).Therefore, what is important for her is not marriage, but their common future. 
Agamemnon is not an ordinary father; he is the leader of a great army.Iphigenia is impressed by the powerful and irresistible army (1338); but she is mostly concerned about her father's responsibility as a leader of this "throng" (1338) and wants to be the one (ψυχὴ μία 1390) who leads those "countless men" (μυρίοι ἄνδρες 1387) to victory 43 .She has not forgotten marriage or future life, she only realizes that it would be impossible for her to live if Greece were enslaved or her father humiliated.She thus makes an exchange in order to preserve glory: "that for me will be my long-lived memorial, that will be my children, my marriage, my good name" (1398-9). Iphigenia changes her mind and accepts to give her life.That is the reason why she is regarded as "inconsistent".Yet in this play she is not the only character who changes their mind.Agamemnon and Menelaus change too, even Achilles changes after Iphigenia's decision.They are not "inconsistent", because these changes are a piece of the dramatic economy: 43 Siegel 1980: 311: Iphigenia realizes that "death is impossible to avoid since the force of the army is irresistible.[…] one of Euripides' purposes here is to explore the psychological result of violent, unreasonable and overwhelming political pressure on the mind of an innocent and naïve youth, whose will and natural desires run counter to the needs of the state".Iphigenia might indeed be impressed by the violence of the army, but the "needs of the state" are those of her father and homeland, so, if we take into account his explanation of 1255-75, there is no contradiction between Agamemnon's point of view and Iphigenia's monologue (1368-1401). Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis the public should be prepared to see an innovative version of the well-known legend.Euripides gradually adapts the main characters to this new legend.Iphigenia, like the other Euripidean victims, gives her life because she loves this life.Otherwise, her sacrifice has no value at all.Concerning Iphigenia's real or artificial change, and her free or constrained choice, first of all we think that there is no reason in the play to doubt the sincerity of her offer, since it continues Agamemnon's last arguments of 1255-75, and corresponds to the sacrificial victim's longing for posthumous glory.Neither is there any reason to introduce a constraint choice: Iphigenia is not Polyxena, Hecuba's daughter; she is not a slave but a free princess, her palace has not been destroyed, and, even if Agamemnon dies in war, her mother, brother and sisters will still remain alive44 .There are indeed similarities between Iphigenia and Polyxena: their sacrifice is the demand of a powerful army (and deity), Ulysses plays an important role in this army, the two mothers, Clytemnestra and Hecuba, are unwilling to yield to the "necessity" of sacrificing their daughter.Yet in the Hecuba, Polyxena is a captive: her future would not be a princess' but a slave's life; she probably would be "purchased" by a "cruel master" (Hec.359-60) and married to a "purchased slave" (Hec.365-6) instead of the royal husband she deserves.Iphigenia is a free woman: there is no risk of losing her freedom or being married to a man of inferior social status.And hence freedom or marriage means less to her than paternal love or Hellenic pride.This is a personal, paternal, and patriotic choice, but a completely conscious one and a completely free one.Euripides emphasizes here the absurdity of war: it 
annihilates love for a father, a family or a country, in the name of love. Before leaving the stage, Iphigenia, like a seer, prophesizes her future: she will be "saved" and her mother will "be glorious" (εὐμλεής) because of this sacrifice (1440); no other "grave will be raised" for her (1442) than "the goddess' altar" (1444); she will be honoured as a "benefactor of Hellas" (1446).That is the reason why she presents demands to her mother, as if now Iphigenia were the commander to whom Clytemnestra shall obey (1460).A parallel can be stressed with Alcestis' unusual (or extravagant?)demands to Admetus in the Alcestis, namely not to remarry Personal, paternal, patriotic: The threefold sacrifice of Iphigenia in Euripides' Iphigenia in Aulis (1552-60).Her last words, reported by the messenger, add nothing to her previous argumentation and change nothing concerning her previous motivation.Furthermore, we can imagine that the impact on the public does not really change because of Iphigenia's substitution by a hind (1585-95). What matters for one who knows the mythical Iphigenia and then sees her on stage is the capacity of the maiden to offer a solution to a major crisis: the one of the army longing for blood and war, the one of Agamemnon seeking to assume his duty as a military leader but remaining torn between this duty and his weak will, and, finally, the one of Greece, the "homeland" of freedom, on the threshold of a war of conquest.In order to accomplish her mission, Iphigenia must disappear qua Iphigenia, and be reborn as a "soldier of duty" 50 .Iphigenia turned the sacrifice from a divine (i.e.irrational) demand to a human (i.e.rational) choice.Euripides' renewal of Iphigenia's character would not be enough to delete the ancient bloody myth of the Atreides whose issue is well known: Agamemnon's murder by Clytemnestra, Oreste's and Electra's revenge, the Erinyes tracking down Orestes, his final release after Athena's intervention, Iphigenia's exile among the Taurians and her longing to return home, her nostalgia.Nevertheless, the new Iphigenia is a successful one, because of the new elements introduced into the ancient legend.It is not a coincidence that this play has given inspiration to later imitations or transformations (e.g. the homonymous tragedy of Racine, or the film by Michalis Kakogiannis), and that it is one of the most performed Euripidean tragedies nowadays in Greece or elsewhere 51 . Certainly, the ancient spectator's reactions remain unknown so one can only wonder how different they would be from ours.Despite Aristotle's negative judgement about Iphigenia's character as an example of unexplained change of mind, it would be hardly believable that the ancient public 50 Or as an "artist", following Luschnig 1988: 126-127. 51Michelakis 2006: 105-129 discusses the reception of the play from Antiquity until now, including the history of the text and a review of the main scholarly readings (generic, historical, cultural, social, or ideological).He notices that "the level of sophistication of the debate provides valuable insights into the world of IA that 'straight' readings […] inevitably miss" (119).We particularly recommend Michelakis' appreciation on Kakogiannis' Iphigenia (127-129); we would only like to add the director's reference to Cyprus: "my name is Michalis Kakogiannis; my country, Cyprus".This opening statement of the film puts forward the "pioneering reading" of the play connected with the history of the Eastern Mediterranean. 
would not have felt this "οἰκεία ἡδονή" 52 that was the result of every tragic performance and is to be associated with tragic κάθαρσις. Iphigenia is not an "ordinary" sacrificial victim; but are there any "ordinary" victims in Euripides? Alcestis, demanding Admetus' eternal mourning and widowhood; Polyxena, claiming her superiority over the Greek leaders; Macaria in the Heraclidae, dying for Athens, which is not her own city but is the only one to offer hospitality to her family; Menoeceus, the young prince choosing death rather than life and protection as a host of one of his father's allies: none of them is an "ordinary" victim accomplishing a ritual or obeying an order. All of them effect a real metamorphosis of the sacrifice motif; they appropriate the theme, but they present it as different from what is already known by means of their legend. Iphigenia's Euripidean legend is not an exception: the poet creates a new Iphigenia, merging all the previous ones. This is the reason for the play's success from Antiquity until now.
Choices in the 11-20 Game: The Role of Risk Aversion Arad and Rubinstein (2012, AER) proposed the 11-20 money request game as an alternative to the P beauty contest game for measuring the depth of thinking. In this paper, we find that choices in the 11-20 game are confounded with risk aversion; hence, the depth of thinking measured is confounded with risk aversion. We also theoretically show that risk aversion will induce players to avoid choosing low numbers in the game. Further, we show that choices in the P beauty contest game are not correlated with risk aversion. Introduction Game theory models often make assumptions about the rationality of decision makers.One common form of rationality assumed is that decision makers have the ability to perform strategic reasoning such as iterative reasoning.In fact, most equilibrium concepts rely on the assumption that decision makers have an infinite reasoning capacity.Thus, it is important to check whether decision makers indeed exhibit such a degree of rationality in reality.The idea that individuals may exhibit a finite degree of rationality can be at least traced back to the newspaper game proposed by Keynes (1936) [1].Based on the newspaper game, Nagel (1995) [2] proposed the P beauty contest game as a method of experimentally measuring the depth of thinking.Since then, the P beauty contest game has become the main vehicle for measuring the depths of thinking of decision makers. The P beauty contest game is structured as follows: Each player is asked to choose a number between 0 and 100 (0 and 100 inclusive).The player with the chosen number that is the closest to p times the average guess wins a prize. When p < 1, the game can be solved using iterated elimination of weakly dominated strategies, which can be seen as follows.First, eliminate any strategies larger than p × 100, then eliminate those that are greater than p 2 × 100, and so on.The unique equilibrium is to guess 0. More recently, Arad and Rubinstein (2012) [3] proposed the 11-20 money request game as a more effective alternative to the P beauty contest game for measuring the depth of thinking.Their main critiques of the P beauty contest game are that (1) it is difficult to understand and (2) the choice of reference point for counting the level of thinking is often somewhat arbitrary.They argue that the 11-20 game does not suffer from these shortcomings. The 11-20 game is as follows: "You and another player are playing a game in which each player requests an amount of money.The amount must be (an integer) between 11 and 20 shekels.Each player will receive the amount that he requests.A player will receive an additional amount of 20 shekels if he asks for exactly one shekel less than the other player.What amount of money would you request?" Arad and Rubinstein (2012) [3] run the 11-20 game experiment and analyze the choice patterns.However, they do not run the P beauty contest game experiment for purposes of comparison.Thus, their argument regarding the advantage of the 11-20 game is mainly based on logical argument rather than empirical evidence. 
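The iterated elimination argument described above for the P beauty contest game can be illustrated in a few lines of Python. The loop simply tracks the shrinking upper bound on undominated guesses; the multiplier 0.7 is the value used in the authors' own experiment, while the number of rounds shown is an arbitrary illustrative choice.

```python
# Upper bound on guesses that survive k rounds of iterated elimination
# of weakly dominated strategies in the P beauty contest game.
p = 0.7            # multiplier used in the experiment described below
bound = 100.0
for k in range(1, 16):
    bound *= p
    print(f"round {k:2d}: guesses above {bound:7.3f} are eliminated")
# The bound p**k * 100 tends to 0, so the unique equilibrium guess is 0.
```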
Although the 11-20 game has its own advantages over the P beauty contest game, in our view, it may also come with an additional cost: an individual's choice may be correlated with his level of risk aversion.The intuition is that an individual may choose a high number, say, 19, rather than a low number, say, 16, not because he is less sophisticated at strategic reasoning (i.e., lower level) but because he is more risk-averse.In this paper, we theoretically show that risk aversion will induce players to avoid choosing low numbers in the game.We experimentally test the hypothesis.The important implication of this hypothesis is that, if risk aversion is the main driving force of the subjects' choices in the 11-20 game, then we can no longer conclude that someone who chooses 19 is less sophisticated than another person who chooses 16.Thus, it is important to empirically verify whether the choices in the 11-20 game are correlated with risk aversion.In other words, whether choices in the 11-20 game are correlated with risk attitudes is our main research question. We conduct experiments in which the subjects play both games.After the experiment, the subjects complete a questionnaire on how they made their choices under the P beauty contest game and the 11-20 game.We also elicit their risk attitudes using the Holt and Laury (2002) [4] task. Our experiment design has several attractive features.First, we elicit the subjects' risk attitudes using the Holt and Laury (2002) [4] task.With this information in hand, we can test our main hypothesis that, in the 11-20 game, the subjects' choices are mainly influenced by risk aversion rather than strategic reasoning and that, in the P beauty contest game, the subjects' choices are not influenced by risk aversion. Second, in our experiment, subjects play both games.Having the subjects' choices in both games allows us to examine whether there is any correlation between their choices in these two games and to check for any systematic patterns. Our experimental results largely support our hypotheses.In particular, the subjects' choices in the 11-20 game are correlated with risk aversion.Risk-averse subjects are much more likely to choose high numbers (i.e., exhibiting low depths of thinking if one literally uses the chosen number to infer the level of strategic reasoning) in the 11-20 game.The key message is that the number chosen in the 11-20 game may not effectively reflect an individual's depth of thinking but rather the individual's risk attitude.On the other hand, we find that choices in the P beauty contest game are not correlated with risk aversion. Related Literature In a recent experimental study on the 11-20 game by Goeree, Louis, and Zhang (2018) [5], the authors found that the choices made in the 11-20 game can be best explained by the idea that subjects make mistakes.They conducted the 11-20 game and also variants of the game.They estimated the 11-20 game using the noisy introspection model developed by Goeree and Holt (2004) [6].They found that "data from these additional treatments clearly refute the level-k model, which predicts no better than the Nash equilibrium in these games".Note that they assume players to be risk-neutral. Our study complements the study by Goeree, Louis, and Zhang (2018) [5] by showing that subjects' aversion to choosing low numbers can be potentially explained by risk aversion. 
Theoretical Analysis In our experiment, the P beauty contest game and the 11-20 game are structured as follows: P beauty contest game: Each player is asked to choose a number between 0 and 100 (0 and 100 inclusive, up to two decimal places).The player with the chosen number being closest to 0.7 times the average guess wins a prize of RMB 50.If two or more players win, the winner will be randomly chosen. 11-20 game: Every player writes down a number that must be between 11 and 20 (11 and 20 inclusive) and also be an integer.Every player is matched with another player to form a pair.The payoff is determined as follows: First, each player in each pair receives the amount equal to the number that he/she specified.Second, in each pair, the player whose number is exactly one less than the other player receives an additional amount of RMB 20. We next study the theoretical prediction of the equilibrium behavior of the two games. For the P beauty contest game, it can be solved using iterated elimination of weakly dominated strategies, which can be seen as follows.First, eliminate any strategies larger than 0.7 × 100.Then, eliminate those greater than 0.7 2 × 100, and so on.The unique equilibrium is to choose 0 for every player. For the 11-20 game, there is no pure strategy Nash equilibrium, but there is a unique (symmetric) mixed strategy Nash equilibrium (see Arad and Rubinstein (2012) [3]).According to Arad and Rubinstein (2012) [3], in the unique mixed strategy Nash equilibrium the numbers 20, 19, 18, 17, 16, and 15 are chosen with probability 5%, 10%, 15%, 20%, 25%, and 25%, respectively.The experimental results reported by Arad and Rubinstein (2012) [3] deviate considerably from the mixed strategy Nash equilibrium in the sense that there seems to be a shift of the distribution of choices towards high numbers (see also Table 1).However, the theoretical prediction of Arad and Rubinstein (2012) [3] is based on the assumption that players are risk neutral.What if players are risk-averse?Similar to the risk-neutral case analyzed in Arad and Rubinstein (2012) [3], when players are risk-averse, there is no pure strategy Nash equilibrium.However, there is a symmetric mixed strategy Nash equilibrium in the game, in which players choose large numbers with larger probability than the equilibrium of the risk-neutral case.Indeed, in our experiment, we find that 79 percent of the subjects are risk-averse.This may explain why the low numbers are chosen much less frequently than the equilibrium prediction in Arad and Rubinstein (2012) [3].Alaoui and Penta (2015) [7] present a model in which the player's depth of thinking is endogenously determined.In their approach, individuals act as if they follow a cost-benefit analysis.Our approach is related to their approach in the sense that players face a trade-off over whether to forego a higher fixed payoff (cost) for the possibility of obtaining the reward (benefit).More particularly, the mixed strategy Nash equilibrium for the case where players are risk averse is analyzed as follows.Suppose that both players' utility functions are U(x).Observe that, in any symmetric mixed strategy Nash equilibrium of the game, the highest number that is chosen with a positive probability must be 20 (otherwise, a player can assign 20 with probability one and obtain a higher payoff).However, 20 cannot be chosen with probability one because, otherwise, a player will obtain a higher payoff by deviating to 19.Let p 20 denote the probability that 20 is chosen by a player.Thus, p 20 
must be such that the player's opponent is indifferent between choosing 20 and 19. That is, U(20) = p_20·U(39) + (1 − p_20)·U(19), which gives p_20 = [U(20) − U(19)]/[U(39) − U(19)]. We say that a utility function V is more risk-averse than U if there exists a strictly concave and increasing function k such that V = k ∘ U; equivalently, for any x_1 < x_2 < x_3, [V(x_2) − V(x_1)]/[V(x_3) − V(x_1)] ≥ [U(x_2) − U(x_1)]/[U(x_3) − U(x_1)] (refer to Figure 1, where V is more risk-averse than U and, for purposes of illustration, the two utility functions are normalized such that V(x_1) = U(x_1) and V(x_3) = U(x_3)). This implies that, when the utility function U becomes more risk-averse, the ratio [U(20) − U(19)]/[U(39) − U(19)] will increase, meaning that p_20 will increase. In fact, it can be shown that, as U becomes more risk-averse, for any 11 ≤ x ≤ 20, the probability that a number equal to or greater than x is chosen will increase. In particular, we have the following result.

Proposition 1: Consider the symmetric mixed strategy Nash equilibrium of the 11-20 game. For any 11 ≤ x ≤ 20, the probability that a player chooses a number greater than or equal to x when players are risk-averse is larger than the corresponding probability when players are risk-neutral.

Hitherto, we have assumed that subjects have the same utility function and thus the same level of risk aversion. We may also allow heterogeneity of risk aversion and consider the Bayesian (pure-strategy) Nash equilibrium of the game. Suppose, for example, that the players' utilities are u = x^(1−r)/(1 − r), where each player's risk aversion level r is the player's private information. Assume that each player's r is drawn from a uniform distribution on [0, 2], which is common knowledge. It can be verified that, in equilibrium, the player with 1.815 < r < 2 will choose 20; the player with 1.4708 < r < 1.815 will choose 19; the player with 0.9933 < r < 1.4708 will choose 18; the player with 0.4051 < r < 0.9933 will choose 17; and the player with 0 < r < 0.4051 will choose 16 (see the Appendix for a proof). Thus, the more risk-averse the player is, the more likely a higher number will be chosen (this result cannot, however, be generalized to arbitrary utility functions or to an arbitrary distribution of the risk aversion level; for CARA utility, for example, it can be verified that the equilibrium thresholds may not be monotonic). The implied choice probabilities of 20, 19, 18, 17, and 16 are roughly 9%, 17%, 24%, 29.5%, and 20.5%, respectively.

Experimental Design We conducted four sessions, and subjects participated in both the P beauty contest and the 11-20 game. A total of 96 subjects (24 subjects in each of the four sessions) participated in the experiment. The subjects were randomly recruited undergraduate students from a major university in Shanghai. In two of the sessions, the P beauty contest game was run before the 11-20 game, and vice versa for the other two sessions. There was no feedback between these two games. After the subjects completed the games and before the outcomes were revealed, they completed a questionnaire that asked them to specify how they chose the numbers in these two games. Their risk attitudes were also elicited using the Holt and Laury (2002) [4] procedure after they played the two games. Other demographic information such as gender, blood type, and horoscope sign was also collected.

In the 11-20 game, a subject's payment is the sum of the following two parts: an amount that is equal to the number he/she chooses and a reward of RMB 20 if the number chosen is 1 less than the number chosen by the matched player. In the P beauty contest game, a subject's payment is RMB 50 if his chosen number is closest to 0.7 times the average guess, and is zero otherwise. In addition to the payments from the games, the subjects also received a participation fee of RMB 5.
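Before turning to how the depth of thinking is measured, the equilibrium claims from the theoretical analysis can be checked numerically. The sketch below is illustrative only, assuming the payoffs and thresholds stated in the text: it first verifies that the risk-neutral mixed strategy reported by Arad and Rubinstein (2012) leaves every number in its support with the same expected payoff, then shows how p_20 rises with CRRA risk aversion, and finally recovers the choice probabilities implied by the Bayesian equilibrium thresholds when r is uniform on [0, 2].

```python
import numpy as np

# 1) Risk-neutral mixed strategy Nash equilibrium of the 11-20 game.
eq_mix = {20: 0.05, 19: 0.10, 18: 0.15, 17: 0.20, 16: 0.25, 15: 0.25}

def expected_payoff(k, opponent_mix):
    """Expected payoff of requesting k: k for sure, plus 20 if the opponent asks for k + 1."""
    return k + 20 * opponent_mix.get(k + 1, 0.0)

for k in range(11, 21):
    print(k, expected_payoff(k, eq_mix))        # 15..20 all earn 20; 11..14 earn strictly less

# 2) Indifference between 20 and 19 under CRRA utility u(x) = x**(1-r)/(1-r):
#    p20 = [U(20) - U(19)] / [U(39) - U(19)], which increases with r.
def u(x, r):
    return np.log(x) if np.isclose(r, 1.0) else x ** (1 - r) / (1 - r)

for r in (0.0, 0.5, 1.0, 1.5):
    p20 = (u(20, r) - u(19, r)) / (u(39, r) - u(19, r))
    print(f"r = {r:3.1f}  ->  p20 = {p20:.3f}")  # r = 0 reproduces the risk-neutral 0.05

# 3) Choice probabilities from the Bayesian equilibrium thresholds with r ~ Uniform[0, 2].
cuts = [0.0, 0.4051, 0.9933, 1.4708, 1.815, 2.0]    # thresholds quoted in the text
for number, lo, hi in zip((16, 17, 18, 19, 20), cuts[:-1], cuts[1:]):
    print(number, round((hi - lo) / 2.0, 4))         # roughly 20.5%, 29.5%, 24%, 17%, 9%
```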
Measurement of the Depth of Thinking The depth of thinking in the P beauty contest game can be estimated as follows. A player who submitted a number larger than 70 is classified as level 0. In general, a level n player submits a number in the range (0.7^(n+1) × 100, 0.7^n × 100]. (In the literature, an alternative method for classifying the depth of thinking is to use 50 as the reference point for level 0. We do not use 50 as the reference point because doing so would require dropping data points above 50. Nevertheless, our result remains qualitatively the same, and significant, if we use 50 as the reference point.) In the 11-20 game, Arad and Rubinstein (2012) [3] use the following method to measure the depth of thinking. A player who writes down 20 is classified as level 0, a player who writes down 19, which is the best response to 20, is classified as level 1, and so on. In general, a player who submits the number 20 − x is classified as level x, where 0 ≤ x ≤ 9 and x is an integer.

Note that one may object to Arad and Rubinstein's (2012) [3] measurement of the depth of thinking in the 11-20 game. The reason is that, although the 11-20 game has no pure strategy Nash equilibrium, it does have a mixed strategy Nash equilibrium, which makes it unclear how to infer the depth of thinking of the players, because they may be playing the mixed strategy Nash equilibrium. In this sense, one may also argue that the 11-20 game is even more complicated than the P beauty contest game. In the current paper, we acknowledge this view. However, we use Arad and Rubinstein's (2012) [3] method of measurement so that we can compare our findings with those of Arad and Rubinstein (2012) [3] and other related studies.

Panel A of Figure 2 reports the relative frequencies of the chosen numbers in the P beauty contest game, and Panel B of Figure 2 reports the inferred depth of thinking. Table 1 reports the distribution of the chosen numbers in the 11-20 game, and Figure 3 reports the distribution of the inferred depth of thinking in the 11-20 game. We can observe that the proportions of choices of the low numbers 15 and 16 in our experiment are 4% and 1%, respectively, which is much lower than the predicted proportion of 25% in the mixed strategy equilibrium. Note that the equilibrium proportions are calculated under the assumption that the subjects are risk neutral.

Risk aversion and choice in the 11-20 Game We hypothesize that choices made in the 11-20 game are highly influenced by individuals' risk attitudes. More specifically, we hypothesize that the more risk-averse the individuals are, the more likely it is that they will choose high numbers such as 18, 19 and 20.
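The level classifications defined in the measurement subsection above can be written compactly. The helper functions below are an illustrative sketch with hypothetical names, not code from the study.

```python
def level_p_beauty(choice, p=0.7, max_level=25):
    """Level n if the choice lies in (p**(n+1)*100, p**n*100]; anything above p*100 is level 0."""
    if choice > p * 100:
        return 0
    n = 0
    while n < max_level and choice <= p ** (n + 1) * 100:
        n += 1
    return n

def level_11_20(choice):
    """Arad and Rubinstein (2012): a request of 20 is level 0, 19 is level 1, ..., 11 is level 9."""
    return 20 - choice

print(level_p_beauty(28.8), level_p_beauty(80), level_11_20(17))   # 3, 0, 3
```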
Table A2 of the appendix reports the distribution of choices in the Holt and Laury task. We compared the proportion of highly risk-averse subjects among the subjects who chose the high numbers of 18, 19, and 20 with those who chose the low numbers (i.e., 17, 16, 15, 14, 13, 12, and 11). We found that 89 percent of the subjects in the high number group are highly risk-averse, which is higher than the 63 percent observed in the low number group. The difference in proportions is significant, with a p-value of 0.03, based on the two-sample test of proportions.

Panel A of Figure A1 reports the distribution of the depth of thinking in the 11-20 game conditional on low risk aversion and high risk aversion. Evidently, the two groups have different distributions; the high risk aversion group is more concentrated on low levels. In particular, the two distributions are significantly different from each other, with a p-value of 0.05, based on the Mann-Whitney test.

Column 2 of Table 2 reports the probit regression, where the dependent variable is low depth of thinking in the 11-20 game. The subjects are classified as having a low depth of thinking if they chose high numbers (i.e., 20, 19, and 18) in the game. The estimated coefficients represent the marginal impacts of the independent variables on the probability of exhibiting a low level of thinking. It is found that high risk aversion increases the probability of a low depth of thinking by 27 percent. Hence, our hypothesis is supported. That is, the inferred depth of thinking in the 11-20 game is biased by risk aversion.

4 Highly risk-averse subjects are defined as those who switch from gamble A to gamble B in choice 8 or later (i.e., the subject has made 7 safe choices (gamble A)). Our design very closely follows that of Holt and Laury (2002) [4]. This group of subjects is also described as very risk averse by Holt and Laury (2002) [4]. Table A1 (online appendix) reports the expected value of the gambles, assuming that the subjects take the objective probabilities as given. We can observe that, if a subject is risk neutral, then he should switch from gamble A to gamble B starting with choice 5. Thus, an individual who switched to gamble B at choice 8 or later must be highly risk averse. Holt and Laury (2002) [4] estimate the coefficient of relative risk aversion of their subjects using the utility function u(x) = x^(1−r)/(1 − r) for x > 0. They find that the coefficient of relative risk aversion increases with the number of safe choices. For example, when the subject switched at choice 8, the implied range of relative risk aversion is 0.68 < r < 0.97; thus, they classify the subject as "very risk averse".

Column 4 of Table 2 reports the ordinary least squares (OLS) regression, where the dependent variable is the depth of thinking in the 11-20 game and the independent variable is the risk premium. 5 It is found that the depth of thinking is negatively correlated with the risk premium. The result also suggests a non-linear relationship between the risk premium and the depth of thinking in the 11-20 game. In particular, the higher the risk aversion, the larger the marginal effect.

Columns 1 and 3 of Table 2 report the same set of regressions for the P beauty contest game. It is confirmed that the depth of thinking in the P beauty contest game is not correlated with risk aversion.
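For readers who want to reproduce the risk-attitude bookkeeping, the sketch below implements the Holt-Laury logic described in footnote 4 and in the Table 2 caption. The dollar payoffs are the classic Holt and Laury (2002) stakes; the experiment used RMB amounts that may differ, so the exact numbers are assumptions, but the switch-point logic is the same.

```python
def expected_values(n):
    """Expected values of gamble A and gamble B in choice n (n = 1..10, high payoff with prob n/10)."""
    p = n / 10.0
    ev_a = p * 2.00 + (1 - p) * 1.60      # classic Holt-Laury "safe" gamble (assumed stakes)
    ev_b = p * 3.85 + (1 - p) * 0.10      # classic Holt-Laury "risky" gamble (assumed stakes)
    return ev_a, ev_b

# A risk-neutral subject prefers A while EV_A > EV_B, so switches to B at choice 5.
for n in range(1, 11):
    ev_a, ev_b = expected_values(n)
    print(n, round(ev_a, 3), round(ev_b, 3), "A" if ev_a > ev_b else "B")

def risk_premium(switch_choice):
    """Risk premium of a subject who first takes gamble B at choice n, using the sign convention of the text."""
    a_n, b_n = expected_values(switch_choice)
    a_p, b_p = expected_values(switch_choice - 1)
    return ((a_n - b_n) + (a_p - b_p)) / 2.0

def highly_risk_averse(switch_choice):
    """Classification used in the paper: switching at choice 8 or later (7 or more safe choices)."""
    return switch_choice >= 8

print(risk_premium(8), highly_risk_averse(8))
```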
In summary, the choices in the 11-20 game are biased by risk aversion. In Arad and Rubinstein (2012) [3], the authors also conducted a costless iteration version in which a subject who chooses a number in the range 11-19 receives 17 shekels, while he receives an additional amount of 20 shekels if the chosen number is one less than the number chosen by the other player. They found that the proportion of subjects corresponding to levels 0, 1, 2, and 3 is not significantly different between the costless iteration treatment and their baseline treatment. They concluded that "the cost of performing an additional iteration (i.e., losing an additional certain shekel) is not the reason that subjects perform no more than three iterations . . ." In contrast to Arad and Rubinstein (2012) [3], we explicitly investigate the effect of risk aversion on the depth of thinking by measuring subjects' risk aversion and observing its correlation with the depth of thinking. Arguably, our approach is a more direct test of the possible link. Further, it can be shown theoretically that, even in the costless iteration version, players are more likely to choose high numbers as they become more risk averse.

5 The risk premium of subjects who switched to gamble B in choice n is equal to [(the expected value of gamble A in choice n − the expected value of gamble B in choice n) + (the expected value of gamble A in choice n−1 − the expected value of gamble B in choice n−1)]/2.

The relationship between choices in the P beauty contest game and the 11-20 game We find that there is a significantly higher proportion of subjects with a low depth of thinking in the 11-20 game. More specifically, the proportions are 0.68 in the 11-20 game and 0.35 in the P beauty contest game. The difference in proportions is significant, with a p-value equal to 0.00.

One may wonder whether there is any relationship between the choices in the P beauty contest game and the choices in the 11-20 game at the within-subject level. This question is important because, even if the 11-20 game is biased by risk aversion, the problem may not be a serious concern if the ranking of the depth of thinking from the P beauty contest game is preserved in the 11-20 game; that is, if there is a shift rather than a re-ordering. It turns out that there is no systematic relationship between the depths of thinking in these two games, except that the level is on average higher in the P beauty contest game. Table A3 (online appendix) reports the OLS regression, where we regress the inferred level of thinking in the P beauty contest game on the inferred level of thinking in the 11-20 game. The result shows that there is no significant relationship between these two levels. Our finding complements a recent finding by Georganas, Healy, and Weber (2015) [8], who found no relationship between subjects' levels observed in the two-person guessing game (Costa-Gomes & Crawford, 2006) [9] and the undercutting game. Our finding provides further evidence for the idea that there is no cross-game correlation between subjects' levels.

Comparison of results in the P beauty contest game to Nagel (1995) The mean number chosen in the P beauty contest game was 28.8. Players tended to guess between 20 and 40, though there were also higher and lower guesses. Choices higher than 100 × 0.7 and choices lower than 10 were relatively infrequent. Overall, these patterns were similar to those found by Nagel (1995) [2].
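The two-sample tests of proportions used in this section are straightforward to reproduce. The sketch below is illustrative: it assumes 96 subjects per game, as stated in the experimental design, and uses the 0.68 and 0.35 proportions of low-depth subjects quoted above.

```python
from math import sqrt, erf

def two_sample_proportion_test(p1, n1, p2, n2):
    """Two-sided z-test for the equality of two proportions, using the pooled estimate."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # 2 * (1 - Phi(|z|))
    return z, p_value

# Low depth of thinking: 68% of 96 subjects in the 11-20 game vs 35% of 96 in the P beauty contest.
z, p = two_sample_proportion_test(0.68, 96, 0.35, 96)
print(round(z, 2), round(p, 6))   # the p-value is essentially zero
```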
Comparison of results in 11-20 to Arad and Rubinstein (2012) The pattern of choices observed in our experiment is similar to Arad and Rubinstein (2012) [3].The vast majority of subjects in our experiment, 83.33 percent, belong to levels 1, 2, and 3 (corresponding to 19, 18, and 17, respectively), which is similar to the 77 percent observed in Arad and Rubinstein (2012) [3].The difference is not significant under the proportion test.Similarly, only 4.17 percent of subjects are at level 0, which is similar to the 6 percent observed in Arad and Rubinstein (2012).Only 12.5 percent of the subjects exhibit level 4 or higher, which is similar to the 12.5 percent observed in Arad and Rubinstein (2012) [3]. Discussion Arad and Rubinstein (2012) [3] found that the choices in the 11-20 game cannot be explained by a mixed strategy Nash equilibrium (they assume the players to be risk-neutral).In particular, they found that the proportion of subjects choosing the low numbers of 15 and 16 is much lower than their theoretical prediction, in which the subjects are assumed to be risk-neutral.In this paper, first, we theoretically show that when players are risk averse, players will choose large numbers with larger probabilities than the risk-neutral case.Hence, our theory can explain why the low numbers are chosen less frequently, and subsequently, the mixed strategy equilibrium with risk-averse players can explain the choices in the 11-20 game. Further, we experimentally show that the choices in the 11-20 game are biased by risk aversion in the sense that the more risk-averse the player is, the more likely the higher number will be chosen.On the other hand, the choices in the P beauty contest game are not biased by risk aversion.The above analysis implies that in equilibrium, the player with 1.815 < r < 2 will choose 20; the player with 1.4708 < r < 1.815 will choose 19; the player with 0.9933 < r < 1.4708 will choose 18; the player with 0.4051 < r < 0.9933 will choose 17; and the player with 0 < r < 0.4051 will choose 16. Finally, note that the above analysis is based on the assumption that r is drawn from [0, 2], which implies that players are risk neutral or risk averse.We can also allow r to be drawn from an interval that contains r < 0 so that players can be risk loving.This will not change the above result qualitatively; i.e., it is still true that as players become less and less risk averse (and more and more risk loving), players are more and more likely to choose low numbers. Experimental Instructions The experiment was conducted in Chinese, and the original instructions were also in Chinese (available upon request).The treatment names in brackets were not shown. Instructions [P Beauty Contest Game] Welcome to our experimental study on decision-making.You will receive a show-up fee of RMB 5.In addition, you can gain more money as a result of your decisions in the experiment. You will be given a subject ID number.Please keep it confidential.Your decisions will be anonymous and kept confidential.Thus, other participants will not be able to link your decisions with your identity.You will be paid in private, using your subject ID, and in cash at the end of the experiment. If you have any questions, please feel free to ask by raising your hand, and one of our assistants will come to answer your questions.Please DO NOT communicate with any other participants. 
The rule of the game is as follows: Each player is asked to choose a number between 0 and 100 (0 and 100 inclusive, up to two decimal places).The player with the chosen number being closest to 0.7 times the average guess wins a prize of RMB 50.If two or more players win, the winner will be randomly chosen. Number: Subject ID: Instructions [11-20 Game] Welcome to our experimental study on decision-making.You will receive a show-up fee of RMB 5.In addition, you can gain more money as a result of your decisions in the experiment. You will be given a subject ID number.Please keep it confidential.Your decisions will be anonymous and kept confidential.Thus, other participants will not be able to link your decisions with your identity.You will be paid in private, using your subject ID, and in cash at the end of the experiment. If you have any questions, please feel free to ask by raising your hand, and one of our assistants will come to answer your questions.Please DO NOT communicate with any other participants. The rule of the game is as follows: Every player writes down a number that must be between 11 and 20 (11 and 20 inclusive) and also be an integer.Every player is matched with another player to form a pair.Your payoff will be determined as follows: First, each player in each pair receives the amount equal to the number that he/she specified.Second, in each pair, the player whose number is exactly one less than the other player receives an additional amount of RMB 20. Number: Subject ID: Figure 2 . Figure 2. The Relative Frequencies of the Chosen Numbers and the Depth of Thinking in the P Beauty Contest Game.(A) The Relative Frequencies of the Chosen Numbers in the P-beauty Contest Game; (B) The Depth of Thinking in the P Beauty Contest Game. Figure 3 . Figure 3.The Depth of Thinking in the 11-20 Game. Table 1 . Distribution of Choices. Table 2 . The Determinants of Depth of Thinking.Columns 1 and 2 report the marginal impact of risk premium on the exhibition of a low depth of thinking.The marginal impacts are estimated using probit regression.In both the P beauty contest game and the 11-20 game, the subjects who exhibited a low depth of thinking are those with level 0, 1, or 2. High risk aversion is a dummy that takes the value of 1 if the subject switches from gamble A to gamble B from choice 8 or later in the lottery task.Columns 3 and 4 report the OLS regression estimates on the depth of thinking.The risk premium of subjects who switched to gamble B in choice n is equal to [(the expected value of gamble A in choice n-the expected value of gamble B in choice n) + (the expected value of gamble A in choice (n−1)-the expected value of gamble B in choice (n-1))]/2.Robust standard errors are in parentheses.The number of subjects in regression 3 and 4 is 89 because 7 subjects always chose gamble B, and hence these subjects are not included as their risk premium cannot be estimated. Table A2 . Choice in the Gambles. Table A3 . The Relationship between the Depth of Thinking in the P Beauty Contest Game and the 11-20 Game.
7,071.4
2016-07-16T00:00:00.000
[ "Economics" ]
The Construction of Efficient Portfolios: A Verification of Risk Models for Investment Making Various statistical models have been used in estimating inputs to mean-variance efficient portfolio construction since the mid-1960s. One can argue how many factors are necessary, but there appears to be substantial evidence that statistical models outperform fundamental models for several expected returns models, such as we test in this analysis. In this paper, we show that tracking portfolios constructed with expected return rankings based on earnings forecasting and price momentum composite alpha strategies produce statistically significant excess returns and increased Sharpe Ratios when optimized with 3-factor statistical risk model. INTRODUCTION In this paper, we study the construction of US mean-variance efficient portfolios during the period 1999-2017. We construct mean-variance portfolios by maximizing the 10-factor U.S. Expected Return stock selection model (USER) alphas and constraining Tracking Error with respect to the S&P 500 benchmark using 3-factor risk model of Blume et al. [1] 1 . The main finding of this paper is that the mean-variance efficient portfolios produce statistically significant portfolio excess returns in the US market. The organization of the paper is as follows. The first section describes the construction of efficient portfolios, estimation of covariance matrix with multi-factor models, and the data used 1 An assumption underlying many studies is that the market model, or more generally a model with one factor common to all securities, generates realized returns. In such a one-factor model, realized returns are the sum of an asset's response to a stochastic factor common to all assets and a factor unique to the individual asset. In the last decade, there has been much interest in models with more than one common stochastic factor, using either pre-specified factors, like Fama and French [2] 3-factor model, or factors identified through factor analysis or similar multivariate techniques. Factor analysis and similar factor analytic techniques have on occasion played an important role in the analysis of returns on common stocks and other types of financial assets. Farrar [3] may have been the first to use factor analysis in conjunction with principal component analysis to assign securities into homogeneous correlation groups. King [4] used factor analysis to evaluate the role of market and industry factors in explaining stock returns. These two studies sparked an interest in multi-index models, and a rich body of empirical work soon emerged. Examples include Elton and Gruber [5,6], Meyer [7], Farrell [8], and Livingston [9], among others. The major goal of these earlier studies was to establish the smallest number of "indexes" needed to construct efficient sets. Factor models have been used in the tests of arbitrage pricing theory and its variants. See for example, Ross and Roll [10] and Dhrymes et al. [11][12][13], to cite a few from the large literature. in construction of size ranked portfolios in estimating common risk factors. The second section describes the expected excess return model used in the study, statistical estimation method, and the data. The third section describes construction of tracking portfolios and presents portfolio statistics. The final section contains concluding remarks. 
CONSTRUCTING EFFICIENT PORTFOLIOS The Markowitz portfolio construction approach is based on the premise that mean and variance of future outcomes are sufficient for rational decision making under uncertainty, to identify the best opportunity set, efficient frontier, where returns are maximized for a given level of risk, or minimize risk for a given level of return. The reader is referred to Markowitz [14,15] for the seminal discussion of portfolio construction and management. The two parameters needed are the portfolio expected return, E(R p ) is calculated by taking the sum of the security weights, w multiplied by their respective expected returns, and the portfolio standard deviation is the sum of the weighted covariances. where, E = {µ 1 , µ 2 ,...,µ N } is N × 1 vector of expected security returns, (N is the number of candidate securities), is the N × N covariance matrix, W = {w 1 , w 2 ,..., w N } is the vector of weights, and 1 is the unit column vector. Sum of weights in (3) indicates that the portfolio is fully invested. One can construct infinite number of Mean-Variance efficient portfolios. Optimal portfolio choice decision will be determined by an investors' risk tolerance 2 . Following Markowitz's [14,[16][17][18] general portfolio optimization objective function is: where, λ is the coefficient of relative risk aversion of the investor 3 . Accurate characterization of portfolio risk requires an accurate estimate of the covariance matrix of security returns. Estimation of the covariance structure is almost always based on a linear return generating multi-factor model (MFM) in the form of: The non-factor, or asset-specific return on security jẽ j,t is the residual return of the security after removing the estimated impacts of the finite number of K factors where 1 ≤ K ≤ N. The termf k,t is the rate of return of factor "k, " which is independent of securities and affects the security's return through its exposure coefficient β jk . Under the assumption that the residual return e j,t is not correlated across securities the covariance matrix of the securities is reduced to form: where: B in (7) is the matrix of exposure coefficients, also referred as "loadings" in the literature, θ in (8) is the covariance matrix of the factors, and in (9) is the covariance of the residuals. The very first model used in the literature is Treynor's market model that led to development of the Capital Asset Pricing Model (CAPM) 4 . There is a rich volume of research covering multi factor models starting with King [4]. In this paper, we use a statistical risk model developed by Blume et al. [1], (BGG). Statistical factor models deduce the appropriate factor structure by analyzing the sample asset returns covariance matrix. There is no need to pre-define factors and compute exposures, as required by fundamental factor models. The only inputs are a time-series of asset returns and the number of desired factors. BGG has shown that return generating model based on factors analysis estimation is superior to commonly used multi-factor models used in the literature. Data and Estimation Methodology The empirical analyses to estimate the factors use monthly returns of 444 sets of size-ranked portfolios of NYSE stocks constructed from the CRSP file. The first set consists of all securities in the CRSP files with complete data for the 5 years 1980 through 1984. 
These securities ranked by their market value as of December 1979 and then partitioned into 30 equally weighted size-ranked portfolios with as close to an equal number of securities as possible. This process is repeated for each rolling 5-year period every month to December 2017 with each set consisting 30 monthly portfolio returns with 60 observations. We use the maximum likelihood method (MLM) to estimate the factor models; the usual way to assess the number of required factors is to rerun the procedure, successively increasing the number of factors until the X 2 test for the goodness of fit developed by Bartlett [25] indicates that the number of factors is generally shown to be sufficient in explaining returns. To use this criterion, one must specify the level of significance, often arbitrarily set at 1 or 5 percent. The level of significance is important since there is a direct relation between the level of significance and the number of significant factors. BGG's findings indicate that the number of required factors varies over time. Their analysis of the required number of factors reveals a positive relation between the number of factors and the variability of returns during the estimation period. A rationale for this finding is that during periods of relatively low volatility, most of the volatility is firm specific and it is difficult to identify the common factors. In more volatile times, the common factors are relatively more important than the firm-specific factors, making it easier to identify them. Their findings indicate that median number of factor required to explain the returns at 5 percent confidence level is three. In Figure 1 we plot the standard deviation of portfolio 1, (small cap), portfolio 30, (large-cap), and the number of factors required at 5 percent confidence level. The number of factors needed during the study period is between two and four. In this paper, we set the number of factors to three rather than varying them over time based on Bartlett's goodness of fit criteria. For each security in our universe and the benchmark (S&P 500), we estimated the factor loadings over the same period in (5) with three factors extracted from 30 size ranked portfolio returns. We then estimated the covariance matrix for each month, based on the previous 5 years of monthly data for the securities in our universe and the benchmark as: Note that the MLM estimation extracts orthogonal factors and the variance of the factors is set to unity by default. That is, Ψ in (9) is reduced to N × N unit matrix. In this paper we assume that factor loadings are stationary over the month (B t+1,k = B t,k ) in estimating the weight of each security in tracking portfolio. ESTIMATION OF EXPECTED RETURNS There are many approaches to security valuation and the creation of expected returns. We believe that asset managers use security analysis and stock selection models consisting of reported earnings, forecasted earnings and financial data 5 . Graham and Dodd [27] recommended that stocks be purchased on the basis of the price-earnings (P/E) ratio. The "low" PE investment strategy was discussed in Williams [28], the monograph that influenced Harry Markowitz and his thinking on portfolio construction. Bloch et al. [29] and Haugen and Baker [30,31] advocated models incorporating earnings-to-price (EP), book-value-toprice, BP, cash flow-to-price, CP, sales-to-price, SP, and other fundamental data. Guerard et al. 
[32,33] added price momentum (PM), price at t-1 divided by the price 12 months ago, t-12, and consensus temporary earnings forecast (CTEF) to expected returns modeling. They denoted the stock selection model as United States Expected Returns (USER). They reported, among other results, that: (1) the EP variable had a larger average weight than the BP variable; (2) the relative PE, denoted RPE, the EP relative to its 60-month average had a higher average weight than the PE variable; and (3) the composite earnings forecast variable, CTEF, had a larger weight than the RPE variable. In fact, in the USER model, only the price momentum variable, PM, had a higher weight than the CTEF variable (and only by one percent, at that) 6 . In this paper, we use the same USER Model. TR t+1 = a 0 + a 1 EP t + a 2 BP t + a 3 CP t + a 4 SP t + a 5 REP t + a 6 RBP t + a 7 RCP t + a 8 RSP t + a 9 CTEF t + a 10 PM t + e t (11) where : EP = [earnings per share]/[price per share] = earnings − price ratio; 6 Wall Street practitioners have embraced the "low PE" approach for well over 50 years. The low PE strategy is a form of the contrarian investment approach associated with Bernard [34] and Dremen [35,36]. The authors believe in the low PE strategy, but not as the exclusive strategy. There is extensive literature on the impact of individual value ratios on the cross section of stock returns. We go beyond using just one or two of the standard value ratios (EP and BP) to include the cash-price ratio (CP) and/or the sales-price ratio (SP). Several major papers on combination of value ratios to predict stock returns (that include at least CP and/or SP) are Fama and French [2,37,38], Bloch et al. [29], Chan et al. [39], Blin et al. [40], Guerard, Gültekin, and Stone [41], and Haugen and Baker [30,31]. Given concerns about both outlier distortion and multicollinearity, Bloch et al. [29] tested the relative explanatory and predictive merits of alternative regression estimation procedures: OLS, robust regression using the Beaton and Tukey [42] bi-square criterion to mitigate the impact of outliers, latent root to address the issue of multicollinearity [see [43]], and weighted latent root, denoted WLRR, a combination of robust and latent root. The Guerard et al. [33] USER model test substantiated the Bloch et al. [29] approach, techniques, and conclusions that WLLR works best among the alternative linear predictive models. Data and Estimation For each security, we use monthly total stock returns and prices from CRSP files, earnings book value cash flow, net sales from quarterly COMPUSTAT files, and consensus earnings-per-share, forecast revisions and breadth from I/B/E/S files. We construct the variables used in (8) for each month starting in January 1980. The USER model is estimated using WLRR analysis over the 60 month (5 year) moving window for each period to identify variables statistically significant at the 10% level. The model uses the normalized coefficients as weights over the past 12 months with Beaton-Tukey outlier adjustment. We use the statistically significant coefficients to estimate the next month's expected return rank, E i , for each security. The USER estimation conditions are virtually identical to those described in Guerard et al. [32,33,44]. PORTFOLIO CONSTRUCTION We construct monthly long only, i.e., w i ≥ 0, portfolios to track S&P 500 index with minimum tracking error by solving the following equation: where λ is the relative risk aversion, 1 − is the unit, vector X t = {x 1,t , x 2,t , . . . 
, x N,t } is the vector of binary variables that indicate if the security i is included in the portfolio in month t, Z t = { x 1,t − x 1,t−1 , x 2,t − x 2,t−1 , ..., x N,t − x N,t−1 } is the vector of binary variables to account for security turnover, c is the transaction cost, p is the portfolio turnover limit percentage, and M is the maximum number of securities allowed in the portfolio. Variance of S&P 500 for the period in Equation (12) is estimated with Equation (10) using the factor loadings of the 3-factor model. We specifically solve Equation (12) with a relatively small number of securities (M), set at 50 and 100 or less, transactions cost (c), set 150 basis points each way, and portfolio turnover (p) set at 8 percent or less 7 . In Table 1, we present portfolio statistics for each year for relative risk aversion of 0.01, 0.05, and 0.10. Excess return is the annual portfolio return net of truncations costs in excess of the annual return of S&P 500. Relative Sharpe ratios is portfolio Sharpe ratio divided by Share ratio of S&P 500. The active average excess returns of the USER model are statistically significant. Tracking error is not statistically significant for 100security portfolio. SUMMARY AND CONCLUSIONS Investing with fundamental, expectations, and momentum variables is a good investment strategy over the long run. The use of multi-factor risk-control significantly improves portfolios performance relative to the benchmark. We considered long only portfolio construction in this study. Construction of realistic Long-Short portfolios are not feasible under these settings unless one assumes that securities are always available to borrow to short sell. However, there are various actively traded derivative securities based on S&P 500 index, the benchmark used in this study. Portfolios constructed in the study tracks the S&P Index with reasonably low tracking error. With the use of these derivative securities, it is possible to expand the opportunity set for investors. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
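As a compact illustration of the machinery described in this paper, the sketch below builds a covariance matrix from a 3-factor statistical model with orthogonal unit-variance factors (so that the covariance reduces to B B′ plus a diagonal residual matrix) and solves the simplest version of the Markowitz objective, maximizing w′μ − λ·w′Σw subject only to full investment. It is an illustrative toy with random inputs, not the paper's optimizer, which additionally imposes tracking error, long-only, turnover, transaction cost, and cardinality constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 3                                    # 20 securities, 3 statistical factors

B = rng.normal(0.0, 1.0, size=(N, K)) * 0.05    # factor loadings (MLM-estimated in the paper)
resid_var = rng.uniform(0.01, 0.04, size=N)     # asset-specific (residual) variances
Sigma = B @ B.T + np.diag(resid_var)            # factor covariance is the identity for orthogonal factors

mu = rng.normal(0.01, 0.005, size=N)            # expected returns (USER alphas in the paper)
lam = 0.05                                      # coefficient of relative risk aversion

# Maximize w'mu - lam * w'Sigma w  subject to  sum(w) = 1 (closed form via a Lagrange multiplier).
ones = np.ones(N)
Sinv_mu = np.linalg.solve(Sigma, mu)
Sinv_1 = np.linalg.solve(Sigma, ones)
eta = (ones @ Sinv_mu - 2.0 * lam) / (ones @ Sinv_1)
w = (Sinv_mu - eta * Sinv_1) / (2.0 * lam)

print(w.sum())                 # 1.0: fully invested
print(w @ mu, w @ Sigma @ w)   # portfolio expected return and variance
```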
3,659.6
2020-10-29T00:00:00.000
[ "Economics", "Computer Science" ]
Effective apsidal precession from a monopole solution in a Zipoy spacetime In this work, we examine the orbit equations originating from Zipoy's oblate metric. Accordingly, the solution of Einstein's vacuum equations can be written as a linear combination of Legendre polynomials of positive definite integers l. Starting from the zeroth order l = 0, in a nearly newtonian regime, we obtain a non-trivial formula favoring both retrograde and advanced solutions for the apsidal precession, depending on parameters related to the metric coefficients. Using Chi-squared statistics, we apply the model to the apsidal precessions of Mercury and asteroids (1566 Icarus and 2-Pallas). As a result, we show that the obtained values favor the oblate solution as a more adapted approach compared to the results produced by Weyl's cylindric and Schwarzschild solutions. Moreover, it is also shown that the resulting solution converges to the integrable case γ = 1 in the sense of the Zipoy–Voorhees metric.

An interesting work published by Zipoy [27] investigates quasi-oblate spheroidal and prolate coordinates by solving the vacuum Einstein equations to study general properties of the metrics, such as their topology, asymptotic behaviour, singularities and stability. Moreover, he found that those metrics present a nearly newtonian solution arising from a linear combination of Legendre polynomials. Bearing in mind that astrophysical phenomena depend on the form of objects, different metrics must provide different aspects of the background physics of the phenomena in a lower gravitational field regime, as compared to strong Einstein gravity. We use the term nearly newtonian in the sense of [28] and [29], as an intermediate strength of the gravitational field between GR and the newtonian gravitational field, in such a way that there are no a priori constraints on the field strength but only on the related movement (geodesic) equations. Needless to say, whenever the presuppositions of the weak field regime and the slow motion condition are applied and the expansion parameters of the metric are set, it leads naturally to the post-newtonian regime [30]. This paper also aims at investigating how different spacetimes may describe an astrophysical phenomenon that departs from a spherical geometry. In the second section, we make a brief review of Zipoy's work on the oblate static metric and the "monopole" solution that resides in the zeroth degree of Legendre polynomials, with a calculation of the related orbit equation. In the third section, the calculations of a nonstandard expression for the perihelion shift are shown, with a comparison with the standard Einstein result and Weyl's axial metric. We also apply the model to analyse the apsidal precession of the asteroids Icarus and 2-Pallas of the inner and outer solar system, respectively. Finally, we make the final remarks in the conclusion section.
Form and general solution of Zipoy's metric We consider the effects in a single plane of orbits, which is compatible with the observed movement of the planets around the Sun limited roughly to the plane of their orbits. Considering the Sun in the center of the circular base of a cylinder and a planet (or a small celestial object) as a particle with mass m orbiting its edge, it can be described by Weyl's line element [31] where the coefficients λ = λ(ρ, z) and σ = σ (ρ, z) are the Weyl potentials. This metric is diffeomorphic to the Schwarzschild's metric and is asymptotically flat [27,[31][32][33][34]. Differently from the works of [35][36][37][38] and [39], where the authors use a mass distribution to model galactic relativistic disks with Weyl's exact solution of Einstein equations, we investigated in [40] approximated solutions of this metric for a test particle in the perihelion precession by expanding the coefficient functions (or potentials) of the metric into a Taylor's series. As a result, e.g., we obtained the perihelion shift of Mercury about 43.105 arcsec/century in accordance with observations. Recently, an additional relativistic effect to the apsidal precession of Mercury was proposed as a result from "interacting terms" on the second-post-newtonian contribution [41] evincing that low-velocity limit regimes of GR is still an import arena of research in the realm of the astrophysical phenomena. To obtain the quasi-oblate coordinates from Weyl coordinates, a change of variable can be applied in such a form ρ = a cosh v cos θ and z = a sinh v sin θ , and a is a length parameter. The resulting line element is given by where (v, θ ) are the quasi-oblate coordinates. Variations of the coordinate v produce ellipsoids intertwined by hyperboloids built with the coordinate θ . Moreover, the exterior gravitational field is given by Einstein's vacuum equations where the notation (, v), (, θ ) and (, vv), (, θθ) denote respectively the first and the second derivatives with respect to the variables v and θ . Noting that Eq. (3) is just Laplace's equation in oblate coordinates, a solution of the coefficient σ can be found. Firstly, a change of variables can be made with x = sinh v and y = sin θ , and after using the method of separation of variables, one can write σ (x, y) = P(x)Q(y), and find and their resulting separated equations where l are the degree of Legendre polynomials. The solutions P(x) and Q(y) are given by the Legendre polynomials of first kind and both Legendre polynomials of first and (the complex) second kind, respectively. This set of equations and solutions were also discussed in [42][43][44]. Due to the structure of the line element in Eq. (2), we only need the coefficient σ to produce a nearly newtonian gravitational regime by the component g 44 [28]. For this reason, we are only interested in the solution for the coefficient σ . Following the results in [27], for the "monopole" solution l = 0, one can obtain: and the σ (r ) potential is given by being 0 ≤ arctan a r ≤ π , β = m a and r = a sinh v. The quantities a and m are length parameters, being β a dimensionless quantity. Hereon, we consider only a and β as fundamental parameters for our further analysis. This new change of variable leads to the line element In this original work, Zipoy showed when r → ∞, the Eq. (11) turns into an isotropic Schwarzschild line element and the set of coordinates (r, θ, φ) turns the usual spherical coordinates. 
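As a quick sanity check on the coordinate change above (a minimal sketch, not taken from the original paper), one can verify numerically that curves of constant v in the (ρ, z) plane are ellipses, which is what makes the coordinates "quasi-oblate".

```python
import numpy as np

# rho = a*cosh(v)*cos(theta), z = a*sinh(v)*sin(theta):
# at fixed v this traces the ellipse (rho / (a*cosh v))**2 + (z / (a*sinh v))**2 = 1.
a, v = 1.0, 0.8
theta = np.linspace(0.0, 2.0 * np.pi, 25)
rho = a * np.cosh(v) * np.cos(theta)
z = a * np.sinh(v) * np.sin(theta)

print(np.allclose((rho / (a * np.cosh(v))) ** 2 + (z / (a * np.sinh(v))) ** 2, 1.0))  # True
```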
We stress that the non-standard ingredient of this work is the space-time itself: rather than some deformation of a spherically symmetric field, we consider the "monopole" Zipoy's original metric vacuum solution as a model for the local solar-system gravitational field on test-particle orbiting its center. Hence, we do not use any energy-momentum tensor to propose a general relativistic disk-like model by using the Zipoy-Vorhees metric [27,45] which has been vastly explored in astrophysical literature [37][38][39] particularly for galaxy modelling. The Zipoy-Vorhees metric is referred to as γ -metric and the γ parameter can be identified in Eq. (9) as γ = β 2 + 1. The two possible non-chaotic (integrable) solutions are when γ is "nearly-minkowskian" (γ −→ 0) or "nearly-Schwarzschildian"(γ −→ 1). A larger discussion on Zipoy-Vorhees metric and variants can be found in [46,47,56]. It has been point out that the Zipoy-Vorhees metric are a form of the static limit of the Tomimatsu-Sato family of solutions [48,49], but the underlying source of that metric, originally proposed by Voorhees, still remains an open problem. Moreover, Gibbons and Volkov [50] also explored the oblate Zipoy-Voorhees metric, rather than just a deformation of the Schwarzschild one, discussing the consequences of a ring wormhole. The properties of the γ -metric and in particular the motion of test particles have been investigated also in [51][52][53][54][55][56][57][58][59]. Orbit equation for the "monopole" solution l = 0 The monopole solution of Zipoy's metric has a two-sheeted topology (involving two asymptotically flat regions) with both positive and negative θ and r coordinates. In order to correspond to the distribution of matter in the known astrophysical systems (time-like trajectories), we restrain the r coordinate to its positive values with the θ coordinate resigned to the plane of the orbits, since each sheet remains asymptotically Schwarzschil-dian, and the g 33 component is positive, there are no closed time-like curves. We consider a constraint to restrain the movement of a test-particle to the plane of the orbit setting the coordinate θ = 0. Hence, we have a constraint on velocities where we denote v α = dx α dτ . Thus, we also denote the quantities v r = dr dτ , v φ = dφ dτ , and v t = dt dτ . Moreover, using Eqs. (11) and (12), one can obtain the following expression To proceed further, we need to know the conserved quantities. This can be obtained using the Euler-Lagrange equations, where L is the Lagrangian functional commonly denoted as L = 1 2 g μνẋ μẋ ν . For the interested case, we set the dependence ofẋ μ for the coordinates φ and t. Hence, one finds and also dt dτ where we denote the conserved quantities L for the specific orbital angular momentum and E for the specific orbital energy. With those previous results, we can rewrite Eq. (13) in a form and after a little algebra, one finds Taking a change of variable u = 1 r , we can find an orbit equation and developing the previous equation, we have where we denote C(u) = 1 + E 2 e −2σ (u) . Equivalently, we can write where we denote α(u) = (1 + a 2 u 2 ) β 2 . Hence, a more convenient form for the resulting orbit equation can be written as It is noteworthy to point out that this equation is a highly nonlinear type, even in the simplest "monopole" case with l = 0 and θ = 0. Analysis on apsidal precession To work with Eq. 
(22), we attenuate the field strength by analyzing the decaying terms and the magnitude of the β parameter, which is related to the coefficient σ by Eq. (10). Firstly, we truncate high orders of the variable u at u⁴, since effects of O(u⁵) are negligible on solar-system scales [60]; this yields Eq. (23).

Because the resulting orbit equation still remains strongly nonlinear, a general β parameter in α(u) compromises the integrability of the equations of motion, which makes it impracticable to obtain any closed analytic solution. We can study approximate solutions if we impose that the parameter β is small, so that the length parameter a must be large. Moreover, for small values of the β parameter, the term α(u) can be expanded as α(u) = 1 + β²a²u² + O(u⁴). Clearly, the third order would produce terms of order higher than u⁴ in the main equation, Eq. (23), so the expansion of α(u) is truncated at u². On the other hand, since E is the specific orbital energy, from the term C(u) we find that E²e^(−2σ(u)) ≫ 1. These two considerations lead us to a more tractable orbit equation for du/dφ, Eq. (24).

[Fig. 1: Pictorial view of the oblate coordinates in the plane (v, θ) with a hyperboloid and a centered ellipsoid. The figure shows the reduction of the oblate coordinates to a two-dimensional plane with θ = 0. In this case, we have a two-dimensional ellipsoid where r → 0 is transformed into a singular ring (in the sense that the Riemann invariants are infinite). In the case r → ∞, the elliptical plane approaches a circular plane.]

Since the variable u can be related to the oblate angles through r = ax = a sinh v, from Eq. (10) we can write e^(−4σ(v)) = e^(4β arctan(csch v)). This allows us to study the closed positive infinite endpoints (asymptotic regions) of the orbit, where v ∈ [0, +∞). At v → +∞ the ellipsoid approaches a circular orbit, and at v → 0 it approaches a ring singularity [27], as illustrated in Fig. 1. Elliptic trajectories can then be studied in between these endpoints, since the potential σ remains finite. Hence, using Eq. (10) and examining the tendencies: close to circular orbits, with v → +∞, σ(v) approaches 0 and the exponential term e^(−4σ(v)) approaches 1. On the other hand, close to the singularity, one can expand the argument of the exponential around zero (v → 0), which leads to −4σ(v) = −2β sgn(1/v)π − v = −2β sgn(+∞)π = −2βπ, so the exponential term approaches e^(−2βπ), where sgn denotes the sign function. Thus, one can obtain two orbit equations in these limits, Eqs. (25) and (26), respectively. A good estimate of an effective orbit equation can be obtained by an asymptotic matched expansion given by the sum of Eqs. (25) and (26) minus an "overlapped" orbit equation that results from setting β = 0 in the two previous equations, which then take the same unique form. Hence, we can find the related orbit equation for du/dφ, with a singularity-free flat region, written in terms of coefficients A, B and C. Using the method shown in [5], we can work with the previous orbit equations analytically; the deviation angle δφ can be found with the constraint F(u_0) = u_0 for a near-circular orbit, where the function F(u) is read off from the orbit equation. With this information at hand, we can evaluate F(u) straightforwardly and solve the related algebraic equation. By using Eq.
(31), it leads us to the "Zipoy precession formula" for the deviation angle. Hence, we have an analytic relation in a flat space avoiding the asymptotic regions. Interestingly, besides the advanced solution, this formula also provides a retrograde precession in terms of the conserved quantities and initial parameters. It is noteworthy that the hyperbolic term persists in the result, evincing the propagation of the nonlinear effects of the Einstein equations even with the breakage of the diffeomorphic coordinate transformations. To obtain the correct physical units, we use the known forms for the specific orbital energy, E = −GM/(2γ), and the specific orbital angular momentum, L² = μp, with μ = GM and p = l(1 − ε²). The terms M, l and ε denote the central Sun mass, the semi-major axis and the orbital eccentricity, respectively. Newton's universal gravitational constant is denoted by G. Since β is small, the hyperbolic exponential can be approximated as e^(−2βπ) ∼ 1 − 2βπ. It is important to stress that higher orders in β are neglected. Accordingly, using Eq. (36) and the orbital period P in days, one arrives at a more familiar expression for the apsidal precession in its final form, Eq. (38). In order to use physical measurements, we adopt the international system of measurement of the Bureau International des Poids et Mesures [74], setting one year 1 yr = 365.256 d, the speed of light c = 299,792,458 m/s [26,74] and the mass of the Sun M = 1.98853 × 10³⁰ kg. The period P is given by P = T(24)(3600), where T is the sidereal orbital period in days. In the case of Mercury, we use T = 87.969 days (NASA Mercury Fact Sheet, https://nssdc.gsfc.nasa.gov). We use 9 data points concerning observations of the perihelion advance of Mercury in units of arcseconds per century (arcsec cy⁻¹), as shown in Table 1. We denote by δφ_sch the standard (Einstein) perihelion precession and by δφ_Weyl the perihelion advance resulting from the Weyl conformastatic solution [40,76], which comes from the axially symmetric motion of a test particle in Weyl's line element [31].

[Table 1: Comparison between the values for the secular precession of Mercury, in units of arcsec/century, of the standard (Einstein) perihelion precession δφ_sch [26], the Weyl conformastatic solution δφ_Weyl, and the observed secular perihelion precession δφ_obs. The first data point was adapted from [61] by adding a supplementary precession calibrated with the Ephemerides of the Planets and the Moon (EPM2011) [62,63]. Observed values: 43.20 ± 0.86 [64]; 43.11 ± 0.22 [65]; 43.11 ± 0.22 [66]; 42.98 ± 0.09 [67]; 43.13 ± 0.14 [68]; 42.98 ± 0.04 [69,70]; 43.03 ± 0.00 [71]; 43.11 ± 0.45 [72,73].]

To control the systematics, we use the GnuPlot 5.2 software to compute a non-linear least-squares fit using the Levenberg-Marquardt algorithm for the goodness of fit to the data. From this algorithm, we obtain the values of the parameters and the related reduced chi-squared (χ²_red). Since Eq. (38) has a negative sign, to obtain an advanced precession solution we calculate its absolute value.
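Equation (38) itself is not reproduced here, but the benchmark column δφ_sch of Table 1 follows from the standard Schwarzschild formula δφ = 6πGM/[c²l(1 − ε²)] per orbit, converted to arcseconds per century with the same unit conventions as above. The sketch below uses the constants quoted in the text together with Mercury's semi-major axis and eccentricity (standard values, not taken from the text) and recovers the familiar ~43 arcsec/century figure.

```python
from math import pi

G = 6.674e-11               # m^3 kg^-1 s^-2
M_sun = 1.98853e30          # kg, as quoted in the text
c = 299_792_458.0           # m/s, as quoted in the text

# Mercury's orbital elements (standard external values).
a = 5.7909e10               # semi-major axis, m
ecc = 0.2056                # orbital eccentricity
T = 87.969                  # sidereal period in days, as quoted in the text

# Schwarzschild apsidal advance per orbit, in radians.
dphi_orbit = 6.0 * pi * G * M_sun / (c**2 * a * (1.0 - ecc**2))

orbits_per_century = 100.0 * 365.256 / T
arcsec_per_rad = 180.0 * 3600.0 / pi
print(dphi_orbit * arcsec_per_rad * orbits_per_century)   # ~42.98 arcsec per century
```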
Running the parameters freely, we find that the a parameter has the same order of magnitude as the planetary semi-major axis: the fit gives a ∼ −1.15806 × 10¹¹, whose absolute value is roughly comparable to the observational value of Mercury's semi-major axis, together with β = 8.86038 × 10⁻⁶. The resulting value for the shift angle is 42.9696 arcsec cy⁻¹, with χ²_red = 0.0166 and a probability p > 0.95, which represents a good fit. It is worth noting that the negative sign of the length parameter a is a relic of the hyperbolic geometry carried through the nonlinear effects of the initially strong gravitational field. Interestingly, the low value of β indicates a Schwarzschild-like integrable system in the in-between zone studied here [75], which implies that this zone is an island of stability. In Table 1, we show the secular precession of Mercury in units of arcsec/century, compared with the standard result of the Schwarzschild solution and the cylindrical Weyl solution for the perihelion shift. The obtained perihelion shift δφ_Zipoy closely reproduces the observed perihelion shift, with the bonus that it naturally provides elliptical orbits, which makes this solution a better physical description for astrophysical purposes with respect to the shape, topology and symmetry of the gravitational field. An interesting case concerns asteroid astrodynamics. Departing from a spherical geometry, we are able to study the precession of two asteroids, as shown in Table 2. The first corresponds to the asteroid Icarus, a near-Earth object (NEO) of the Apollo group with a highly elliptical orbit. It has been regarded as a relativistic asteroid, with a closest approach to the Sun even smaller than Mercury's, and it is also a Venus and Mars crosser. Its observed perihelion precession is 10.05 arcseconds per century, with semi-major axis 1.61258 × 10¹¹ m, a large eccentricity of 0.82695, and an orbital period T = 408.781 days [26]. As a result, we obtained the parameter values a ∼ −3.21987 × 10¹¹ and β = 8.0222 × 10⁻⁶, which provide a shift angle of 10.029 arcsec cy⁻¹ with χ²_red = 0.00272 and p > 0.95. In addition, as an example of retrograde precession, which is not accounted for by the standard Einstein perihelion formula, we studied the protoplanet 2 Pallas, even though the available information on 2 Pallas is still scarce. The 2 Pallas asteroid is one of the largest asteroids in the asteroid belt and is a Jupiter crosser. Its observed perihelion precession is −1333.534 arcseconds per century, with semi-major axis 4.14520 × 10¹¹ m, eccentricity 0.2812, and an orbital period T = 1686.43 days (available at https://newton.spacedys.com/astdys2/index.php?pc=3.0, Asteroids Dynamic Site-AstDyS). As a result, we obtained the parameter values a ∼ −1.680 × 10¹³ and β = 8.0222 × 10⁻⁶, which provide a shift angle of −133.481 arcsec cy⁻¹ with χ²_red = 1245.46 and p > 0.95. In the two previous cases, the value of the β parameter remains the same and, unless a counterexample is found in the near future, its value of order ∼10⁻⁶ should remain the same for any large object in the Solar system (large asteroids, comets and planets). As shown, the gravitational field produced in this space-time is not the same as in the Schwarzschild case. The Zipoy spacetime seems to be better adapted astrophysically than the standard PPN solutions, and it naturally provides both advanced and retrograde precessions.
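As an illustration of the goodness-of-fit metric used here, the reduced chi-squared for a single predicted shift angle against a set of observed values can be computed as below. The observed values and uncertainties are placeholders patterned after Table 1 (the entry with a vanishing quoted error is given a small nominal value), and the prediction is the quoted 42.9696 arcsec cy⁻¹; this is a sketch of the statistic, not a re-run of the published fit.

```python
import numpy as np

# Observed secular precession of Mercury (arcsec/century) and 1-sigma errors,
# patterned after Table 1; the 0.00 uncertainty is replaced by a nominal 0.05.
obs = np.array([43.20, 43.11, 43.11, 42.98, 43.13, 42.98, 43.03, 43.11])
err = np.array([0.86, 0.22, 0.22, 0.09, 0.14, 0.04, 0.05, 0.45])

prediction = 42.9696        # fitted shift angle quoted in the text
n_params = 2                # free parameters of the fit (a, beta)

chi2 = np.sum(((obs - prediction) / err) ** 2)
chi2_red = chi2 / (len(obs) - n_params)
print(f"chi2_red = {chi2_red:.3f}")
```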
effects on the orbits of their solutions. We have studied solutions of the vacuum Einstein equations for a quasi-oblate metric, obtaining a set of solutions that depends on the Legendre polynomials, based on Zipoy's seminal paper [27]. The simplest case studied was the so-called "monopole" solution, corresponding to the zeroth-order Legendre polynomial l = 0. Starting from the related Lagrange equations, we obtained the orbit equations in the asymptotic regions, which turned out to be a highly nonlinear set of equations. To obtain an analytical solution, we studied the closed positive infinite interval and obtained an elliptical pattern for the orbits in between, in a flat space. As a result, we obtained a non-standard formula for the perihelion precession, depending on the dimensionless parameter β and the length parameter a. The β parameter was initially fixed at a low magnitude to allow us to study the orbit equation, and it was later found to be of the order of ∼10⁻⁶. In terms of the γ metric, this is compatible with the condition for an integrable system, γ → 1. It is worth pointing out that no a priori assumptions concerning the strength of the field (such as a weak-field assumption) were imposed. Moreover, the values of the length parameter a were adjusted numerically using chi-squared statistics for 9 observational data sets. We have shown that, as pointed out by Zipoy, the length parameter can be given a physical meaning, since it is closely related to the semi-major axis. Interestingly, the values converged to the same order of magnitude as the semi-major axis of Mercury. Unlike the standard Einstein solution and the cylindrical Weyl one, the precession formula from oblate coordinates naturally provides both retrograde and advanced solutions for the perihelion precession; moreover, elliptical orbits are native to those coordinates, which reinforces the idea that the topological nature of the problem is an important feature and that the strength of the gravitational field is highly constrained by this topology. In summary, this analysis was carried out within GR in a nearly Newtonian limit, with no need for additional extensions or modifications of standard gravity. As future perspectives, the extended analyses of the deflection of light, radar echo and gravitational lensing in the oblate metric are currently in progress.
5,371.8
2019-08-30T00:00:00.000
[ "Physics" ]
A search for a doubly-charged Higgs boson in pp collisions at sqrt(s) = 7 TeV A search for a doubly-charged Higgs boson in pp collisions at sqrt(s) = 7 TeV is presented. The data correspond to an integrated luminosity of 4.9 inverse femtobarns, collected by the CMS experiment at the LHC. The search is performed using events with three or more isolated charged leptons of any flavor, giving sensitivity to the decays of pair-produced triplet components Phi[++]Phi[--], and Phi[++]Phi[-] from associated production. No excess is observed compared to the background prediction, and upper limits at the 95% confidence level are set on the Phi[++] production cross section, under specific assumptions on its branching fractions. Lower bounds on the Phi[++] mass are reported, providing significantly more stringent constraints than previously published limits. Introduction The existence of non-zero neutrino masses may represent a signal of physics beyond the standard model (SM) [1]. The observation of a doubly-charged scalar particle would establish the type II seesaw mechanism as the most promising framework for generating neutrino masses [2]. The minimal type II seesaw model [3][4][5][6] is realized with an additional scalar field that is a triplet under SU(2) L and carries U(1) Y hypercharge Y = 2. The triplet contains a doubly-charged component Φ ++ , a singly-charged component Φ + and a neutral component Φ 0 . In this paper, the symbols Φ ++ and Φ + are used to refer also to the charge conjugate states Φ −− and Φ − . In the literature Δ and H have also been used. Our choice of the symbol Φ for the triplet components avoids possible confusion with the minimal supersymmetric model (MSSM) H + boson. The Φ ++ particle carries double electric charge, and decays to same-sign lepton pairs + α + β with flavor indices α, β, where α can be equal to or different from β. The Φ ++ * e-mail<EMAIL_ADDRESS>Yukawa coupling matrix Y Φ is proportional to the light neutrino mass matrix. The measurement of the Φ ++ → + α + β branching fractions would therefore allow the neutrino mass generation mechanism to be tested [7]. In this scenario, measurements at the Large Hadron Collider (LHC) could shed light [8][9][10][11] on the absolute neutrino mass scale, the mass hierarchy, and the Majorana CP-violating phases. The latter are not measurable in current neutrino-oscillation experiments. In this article the results of an inclusive search for a doubly-charged Higgs boson at the Compact Muon Solenoid (CMS) experiment are presented, based on a dataset corresponding to an integrated luminosity of 4.93 ± 0.11 fb −1 . The dataset was collected in pp collisions at √ s = 7 TeV during the 2011 LHC running period. Both the pairproduction process pp [12,13] and the associated production process pp → Φ ++ Φ − → + α + β − γ ν δ [14,15] are studied. It is assumed that the Φ ++ and Φ + are degenerate in mass. However, as the singly-charged component is not fully reconstructed, this requirement impacts only the cross section, as long as the mass splitting is such that cascade decays (e.g. Φ ++ → Φ + W + * → Φ 0 W + * W + * ) are disfavored [16]. The relevant Feynman diagrams and production cross sections, calculated following [13], are presented in Figs. 1 and 2. The Φ ++ → W + W + decays are assumed to be suppressed. 
In the framework of type II seesaw model [3][4][5][6], where the triplet is used to explain neutrino masses, this is a natural assumption: the decay width to the W + W + channel is proportional to the vacuum expectation value of the triplet (v Φ ) and, as the neutrino masses are determined from the product of the Yukawa couplings and v Φ , then large enough v Φ values would require unnaturally small Yukawa couplings. The search strategy is to look for an excess of events in one or more flavor combinations of same-sign lepton pairs coming from the decays Φ ++ → + α + β . Final states containing three or four charged leptons are considered. In addition to a model-independent search in each final state, where the Φ ++ is assumed to decay in 100 % of the cases in turn in each of the possible lepton combinations (ee, μμ, τ τ, eμ, eτ, μτ ), the type II seesaw model is tested, following [9], at four benchmark points (BP), that probe different neutrino mass matrix structures. BP1 and BP2 describe a neutrino sector with a massless neutrino, assuming normal and inverted mass hierarchies, respectively. BP3 represents a degenerate neutrino mass spectrum with the mass taken as 0.2 eV. The fourth benchmark point BP4 represents the case in which the Φ ++ has an equal branching fraction to each lepton generation. This corresponds to the following values of the Majorana phases: α 1 = 0, α 2 = 1.7. BP4 is the only case in which α 2 is non-vanishing. For all benchmark points, vanishing CP phases and an exact tri-bimaximal neutrino mixing matrix are assumed, fixing the values of the mixing angles at θ 12 = sin −1 (1/ √ 3), θ 23 = π/4, and θ 13 = 0. The four benchmark points, along with the modelindependent search, encompass the majority of the parameter space of possible Φ ++ leptonic decays. The values of the neutrino parameters at the benchmark points are compatible with currently measured values within uncertainties. The recent measurement of a non-zero θ 13 angle [17, 18] is the only exception, and influences the branching fractions at the benchmark points by a maximum of a few percent [9]. The branching fractions at the benchmark points are summarized in Table 1. The first limits on the Φ ++ mass were derived based on the measurements done at PEP and PETRA experiments [19][20][21][22][23][24] from the Tevatron and ATLAS [31-33] experiments, which set lower limits on the Φ ++ mass between 112 and 355 GeV, depending on assumptions regarding Φ ++ branching fractions. In all previous searches, only the pair-production mechanism, and only a small fraction of the possible final state combinations, were considered. The addition of associated production and all possible final states significantly improves the sensitivity and reach of this analysis. The CMS detector The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter with a 3.8 T field. Within the field volume are a silicon pixel and strip tracker, a crystal electromagnetic calorimeter (ECAL) and a brass/scintillator hadron calorimeter. Muons are measured in gas-ionization detectors embedded in the steel return yoke. Extensive forward calorimetry complements the coverage provided by the barrel and endcap detectors. CMS uses a right-handed coordinate system, with the origin at the nominal interaction point, the x axis pointing to the center of the LHC ring, the y axis pointing up (perpendicular to the LHC ring), and the z axis along the counterclockwise-beam direction. 
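For reference, the exact tri-bimaximal mixing assumed at the benchmark points above corresponds, up to sign and phase conventions, to the following neutrino mixing matrix and angles.

```latex
U_{\mathrm{TBM}} =
\begin{pmatrix}
 \sqrt{2/3} & 1/\sqrt{3} & 0 \\
 -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \\
 1/\sqrt{6} & -1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix},
\qquad
\theta_{12}=\sin^{-1}\!\big(1/\sqrt{3}\big),\quad
\theta_{23}=\pi/4,\quad
\theta_{13}=0 .
```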
The polar angle, θ , is measured from the positive z-axis and the azimuthal angle, φ, is measured in the x-y plane. The inner tracker measures charged particles within the pseudorapidity range |η| < 2.5, where η = − ln[tan(θ/2)]. It consists of 1440 silicon pixel and 15 148 silicon strip detector modules, and is located in the superconducting solenoid. It provides an impact parameter resolution of ∼15 µm and a transverse momentum (p T ) resolution of about 1.5 % for 100 GeV particles. The electromagnetic calorimeter consists of 75 848 lead tungstate crystals which provide coverage in pseudorapidity |η| < 1.479 in the barrel region and 1.479 < |η| < 3.0 in two endcap regions (EE). A preshower detector consisting of two planes of silicon sensors interleaved with a total of three radiation lengths of lead is located in front of the EE. The muons are measured in the pseudorapidity range |η| < 2.4, with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Matching the muons to the tracks measured in the silicon tracker results in a transverse momentum resolution between 1 and 5 %, for p T values up to 1 TeV. The detector is highly hermetic, ensuring accurate measurement of the global energy balance in the plane transverse to the beam directions. The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select, in less than 1 µs, the most interesting events. The High Level Trigger processor farm further decreases the event rate from around 100 kHz to around 300 Hz, before data storage. A detailed description of the CMS detector may be found in Reference [34]. Experimental signatures The most important experimental signature of the Φ ++ is the presence of two like-charge leptons in the final state, with a resonant structure in their invariant mass spectrum. In this final state the background from SM processes is expected to be very small. For the four-lepton final state from Φ ++ Φ −− pair production, both Higgs bosons may be reconstructed, giving two like-charge pairs of leptons with similar invariant mass. Like-charge backgrounds arise from various SM processes, including di-boson events containing two to four leptons in the final state. The Z + jets and tt + jets, with leptonic W decays, contribute to the non-resonant background through jet misidentification as leptons, or via genuine leptons within jets. The W + jets and QCD multijet events are examples of large cross section processes which potentially contribute to the SM background. However, the requirement of multiple isolated leptons with high transverse momentum almost entirely removes the contribution from these processes. Monte Carlo simulations The multi-purpose Monte Carlo (MC) event generator PYTHIA 6.4.24 [35] is used for the simulation of signal and background processes, either to generate a given hard interaction at leading order (LO), or for the simulation of showering and hadronization in cases where the hard processes are generated at next-to-leading order (NLO) outside PYTHIA, as in the case of top quark related backgrounds. The TAUOLA [36] program is interfaced with PYTHIA to simulate τ decay and polarization. Signal samples in the associated production mode are generated by using CALCHEP 2.5.2 [37], as PYTHIA only contains the doubly-charged particle. The diboson and Drell-Yan events are generated using MADGRAPH 5. 
Trigger Collision events are selected through the use of doublelepton (ee, eμ, μμ) triggers. In the case of the ee and eμ triggers, a minimum p T of 17 and 8 GeV is required of the two leptons respectively. In the case of the μμ trigger, the muon p T thresholds changed during the data-taking period because of the increasing instantaneous luminosity. A 7 GeV p T threshold was applied to each muon during the initial data-taking period (the first few hundred pb −1 ). The thresholds were later raised to 13 and 8 GeV for the two muons, and then to 17 and 8 GeV. The trigger efficiency is in excess of 99.5 % for the events passing the selection defined below. Lepton identification The electron identification uses a cut-based approach in order to reject jets misidentified as electrons, or electrons originating from photon conversions. Electron candidates are separated into categories according to the amount of emitted bremsstrahlung energy; the latter depends on the magnetic field intensity and the large and varying amount of material in front of the electromagnetic calorimeter. A bremsstrahlung recovery procedure creates superclusters (i.e. groups of clusters), which collect the energy released both by the electron and the emitted photons. Transverse energy (E T ) dependent and η-dependent selections are applied [42]. Selection criteria for electrons include: geometrical matching between the position of the energy deposition in the ECAL and the direction of the corresponding electron track; requirements on shower shape; the impact parameter of the electron track; isolation of the electron; and further selection criteria to reject photon conversions. To reduce contamination in the signal region, electrons must pass a triple charge determination procedure based on two different track curvature fitting algorithms and on the angle between the supercluster and the pixel hits. In addition, electrons are required to have p T > 15 GeV and |η| < 2.5. Muon candidates are reconstructed using two algorithms. The first matches tracks in the silicon detector to segments in the muon chambers, whereas the second performs a combined fit using hits in both the silicon tracker and the muon systems [43]. All muon candidates are required to be successfully reconstructed by both algorithms, and to have p T > 5 GeV and |η| < 2.4. Isolation of the final state leptons plays a key role in suppressing backgrounds from tt and Z + jets. A relative isolation variable (RelIso) is used, defined as the sum of the p T of the tracks in the tracker and the energy from the calorimeters in an isolation cone of size 0.3 around the lepton, excluding the contribution of the lepton candidate itself, divided by the lepton p T . A typical LHC bunch-crossing at high instantaneous luminosity results in overlapping proton-proton collisions ('pileup'). The isolation variable is corrected for energy deposition within the isolation cone by pile-up events, by means of the FASTJET energy-density algorithm [44,45]. A description of the performance of the isolation algorithm in collision data can be found in [42,43]. In order to reconstruct hadronic τ candidates (τ h ), the 'hadron plus strips' (HPS) algorithm [46] is used, which is based on particle flow (PF) [47] objects. One of the main tasks in reconstructing hadronically-decaying τ is determining the number of π 0 mesons produced in the decay. 
The HPS method combines PF electromagnetic objects into 'strips' at constant η to take into account the broadening of calorimeter deposits due to conversions of π 0 decay photons. The neutral objects are then combined with charged hadrons to reconstruct the τ h decay. The τ h candidates are required to have p T > 15 GeV and |η| < 2.1. Additional criteria are applied to discriminate against e and μ, since these particles could be misidentified as one-prong τ h . The τ h candidates in the region 1.460 < |η| < 1.558 are vetoed, owing to the reduced ability to discriminate between electrons and hadrons in the barrel-toendcap transition region. In the following, the term lepton is used to indicate both light leptons (e, μ) and the τ -lepton before decay (τ ). It is not possible to distinguish between leptonic τ decay products and prompt light leptons. Therefore, in scenarios that include a τ the light lepton contribution is assumed to be a mixture of prompt and non-prompt particles and selection criteria are tuned accordingly. Beyond that there is no attempt to distinguish the origin of the light leptons. As a result, a final state e + e + τ − h could arise from In both scenarios we look for a resonance in the e + e + invariant mass, which is narrow in the case of direct signal decay to lightleptons and wide in the case of the presence of a τ in the intermediate state. Because of the reconstruction efficiency we treat the B(Φ ++ → τ + τ + ) = 100 % assumption separately and optimize the selection criteria accordingly. However a given event may be assigned to more than one signal type if it matches the corresponding final state (the above mentioned example event would contribute to all scenarios where eτ , τ τ branching fractions are non-zero assuming the event passes the respective selection criteria). Pre-selection requirements and signal selection optimization method In order to select events from well-measured collisions, a primary vertex pre-selection is applied, requiring the number of degrees of freedom for the vertex fit to be greater than 4, and the distance of the vertex from the center of the CMS detector to be less than 24 cm along the beam line, and less than 2 cm in the transverse plane. In case of multiple primary vertex candidates, the one with the highest value of the scalar sum of the total transverse momentum of the associated tracks is selected [48]. Data and simulated events are preselected by requiring at least two final-state light leptons, with p T > 20 GeV and p T > 10 GeV respectively. If pairs of light leptons with invariant mass less than 12 GeV are reconstructed, neither of the particles is considered in the subsequent steps of the analysis. This requirement rejects low-mass resonances and light leptons from B meson decays. In order to reduce the background contribution from QCD multijet production and misidentified leptons, the two least well-isolated light leptons are required to have summed relative isolation ( RelIso) less than 0.35. In case of the B(Φ ++ → τ + τ + ) = 100 % assumption, the requirement is tightened to less than 0.25. In addition, the significance of the impact parameter, SIP = ρ PV /Δρ PV , is required to be less than four for the reconstructed light leptons except for the B(Φ ++ → τ + τ + ) = 100 % assumption; here ρ PV denotes the distance from the lepton track to the primary vertex and Δρ PV its uncertainty. The remaining event sample is divided into two categories, based on the total number of final state lepton candidates. 
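A minimal sketch of the relative-isolation variable and the light-lepton pre-selection described above is given below. The lepton attributes, the `invariant_mass` helper and the pile-up correction inputs are illustrative assumptions, not CMS software API; this is one plausible coding of the quoted cuts, not the analysis implementation.

```python
from itertools import combinations

def rel_iso(lepton_pt, trk_sum_pt, ecal_et, hcal_et, rho=0.0, eff_area=0.0):
    """Relative isolation in a cone of dR = 0.3 around the lepton.
    Calorimeter energy is corrected for pile-up with an energy-density term."""
    calo = max(ecal_et + hcal_et - rho * eff_area, 0.0)
    return (trk_sum_pt + calo) / lepton_pt

def passes_preselection(light_leptons, tautau_mode=False):
    """Sketch of the pre-selection; leptons are assumed to carry
    pt, rel_iso and sip attributes (illustrative names)."""
    leps = sorted(light_leptons, key=lambda l: l.pt, reverse=True)
    if len(leps) < 2 or leps[0].pt < 20.0 or leps[1].pt < 10.0:
        return False
    # Drop leptons belonging to any pair with invariant mass below 12 GeV
    vetoed = set()
    for a, b in combinations(leps, 2):
        if invariant_mass(a, b) < 12.0:          # invariant_mass() assumed given
            vetoed.update((a, b))
    leps = [l for l in leps if l not in vetoed]
    if len(leps) < 2:
        return False
    # Summed relative isolation of the two least-isolated light leptons
    iso_cut = 0.25 if tautau_mode else 0.35
    if sum(sorted(l.rel_iso for l in leps)[-2:]) > iso_cut:
        return False
    # Impact-parameter significance SIP < 4 (not applied in the tau-tau mode)
    if not tautau_mode and any(l.sip >= 4.0 for l in leps):
        return False
    return True
```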
The search is then performed in various final state configurations for a set of pre-determined mass hypotheses for the Φ ++ . For each mass point, the selection criteria described in Sect. 6 are optimized using simulations, by maximizing the signal significance by means of the following significance estimator: where s is the signal expectation and b is the background expectation. The estimator comes from the asymptotic expression of significance Z = √ 2 log Q, where Q is the ratio of Poisson likelihoods P (obs|s + b) and P (obs|b). The estimator S cL applies in the case of a counting experiment without systematic errors. We do not consider systematic errors at this stage as we select optimal cuts within the top 10 % of the significance across mass points and the small variations coming from systematic uncertainties do not change the optimization significantly. The c and L subscripts refer to counting experiment and likelihood, respectively. The size of the mass window is a part of the optimization procedure and is limited by the mass resolution of the signal. Analysis categories The analysis is separated into categories based on the total number of light leptons and τ h in the reconstructed events. The decay channel with B(Φ ++ → τ τ ) = 100 % is handled separately, since the event topology is somewhat different from the final states with prompt decays to light leptons. In particular, the Φ ++ reconstructed mass peak has a much larger width due to final-state neutrinos, which affects the choice and optimization of the event selection criteria. The final signal efficiency depends on the Φ ++ production mechanism, decay channel and chosen mass point. For pair-production process and 200 GeV Φ ++ mass the selection efficiency varies from about 62 % in the eμ channel to 16 % in τ channels and only 4 % in the τ τ channel. Lower efficiency in decay channels that involve τ -leptons results from the tau ID efficiency, tighter selection criteria and the requirement of two light leptons at the trigger level. The efficiencies slightly increase at higher mass assumptions. For associated production process the selection efficiencies are decreased by about a factor of two. and τ h final states These final states are relevant for both Φ ++ production mechanisms. The associated production process yields three charged leptons and a neutrino. The pair-production process can contribute to this category if one of the four leptons is lost due to lepton identification inefficiency or detector acceptance. In order to separate signal from background, a set of selection criteria is optimized for significance for various combinations of final states and mass hypotheses. Three main categories of final states are considered: Φ ++ decays to light leptons (ee, eμ and μμ), Φ ++ decays to a light-lepton and a τ -lepton (eτ , μτ ) and Φ ++ decay to τ -leptons (τ τ ). Both hadronic and leptonic τ decays are considered. At least two light leptons in the final state are required because of trigger considerations. Because of the high mass of the Φ ++ , its decay products are very energetic, allowing for signal separation through requirements on the scalar p T sum of the three leptons ( p T ) as a function of m Φ . In addition, as a number of important background processes contain a Z boson, events with opposite-sign same-flavor light lepton combinations are rejected if |m( + − ) − m Z | is below a channel-dependent threshold. A selection on the opening angle between the samecharge leptons, Δϕ, is also applied. 
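The estimator S_cL introduced above has the familiar closed form of the asymptotic Poisson counting significance; the implementation below is consistent with the quoted definition, stated here as an assumption about the exact expression used in the optimization rather than as the analysis code.

```python
import math

def s_cl(s: float, b: float) -> float:
    """Asymptotic counting significance sqrt(2 log Q) for expected signal s
    over expected background b, with no systematic uncertainties."""
    if s <= 0.0 or b <= 0.0:
        return 0.0
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Example: cuts would be chosen per mass point by maximizing this estimator
print(s_cl(5.0, 2.0))
```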
Background processes, such as the production of a Z boson recoiling from a jet misidentified as a lepton, yield leptons with a larger opening angle than those originating from Z decay. For the pairproduction of two signal particles we expect both lepton pairs to be boosted and the opening angle to be smaller. A loose requirement on the missing transverse energy (E miss T ), defined as the negative vectorial momentum sum of all reconstructed particle candidates, is applied in the eτ, μτ and τ τ channels in order to further reduce the background contributions, especially from Drell-Yan processes. Finally, the mass window (m lower , 1.1m Φ ) is defined. The lower bound, m lower , depends on the final state. The mass windows are chosen by requiring high efficiency for signal events across a variety of final states (including τ leptonic decays, which contribute significantly in some scenarios), while keeping the analysis independent of the assumed relative branching fractions. The selection criteria used in this category are summarized in Table 2. For the 100 % branching fraction scenarios, both signal and background events are filtered based on the leptonic content. For example, when showing results for 100 % branching fraction to electrons, only events containing electrons are used. For the four benchmark points, the contributions from all possible lepton combinations are taken into account and added to the relevant distributions according to the relative branching fractions. The selection criteria of eτ and μτ channel are used for the four benchmark points to account for various final state signatures. After the application of the selection criteria, the event yields observed in data are in reasonable agreement with the sum of the expected contributions from backgrounds. The mass distributions for the simulated total background and the hypothesized BP4 benchmark point signal after applying the pre-selections are shown in Fig. 3, along with the measured yields. The event yield evolution as a function of the selections applied is also shown. For the final analysis, the background estimate is derived from data, using the methods described in Sect. 7. The requirement of a fourth lepton substantially reduces the background. The Z veto is not applied for scenarios involving only light-leptons because of low signal efficiency. A mass window around the doubly charged Higgs boson mass hypothesis is defined. It consists of a two-dimensional region in the plane of m( + + ) vs. m( − − ), where m( + + ) and m( − − ) denote the reconstructed same-sign dilepton masses. The window boundaries are the same as in Sect. 6.1. Because of the large width of the reconstructed mass peak, the mass window is not selected in the case of B(Φ ++ → τ + τ + ) = 100 % in order to keep the signal efficiency high. The selection criteria used in this category are summarized in Table 3. The resulting mass distributions are shown in Fig. 4. Good agreement is seen between the event yields observed in the data and the expected background contributions. Sideband method A sideband method is used to estimate the background contribution in the signal region. The sideband content is determined by using same-charge di-leptons with invariant mass in the ranges (12 GeV, m lower ) and (1.1m Φ , 500 GeV) for Fig. 4 Left: Like-charge invariant mass distribution for the four-lepton final state for MC simulation and data after pre-selection. 
Where τ decay products are present in the final state, a visible mass is reconstructed that does not include the contribution of neutrinos. The expected distribution for a Φ ++ with a mass of 350 GeV for the benchmark point BP4 is also shown. Right: Event yields as a function of the applied selection criteria the three-lepton final state selection. In the case of the fourlepton final state, the sidebands comprise the Φ ++ and Φ −− two-dimensional mass plane with dilepton invariant masses between 12 GeV and 500 GeV, excluding the candidate mass region. The upper bound of 500 GeV is chosen due to the negligible expected yields for both signal and background at higher masses, for the data sample used. The sideband content is determined after the preselection requirements in order to ensure a reasonable number of events. For each Φ ++ mass hypothesis, the ratio of the event yields in the signal region to those in the sideband, α, is estimated from the sum of all SM background MC processes: where N SR and N SB are the event yields in the signal and sideband regions respectively, estimated from simulated event samples. Modifications to this definition are made in the case of very low event counts: -If N SB = 0, then α = N SR is assumed -If N SR is less than the statistical uncertainty, then the statistical uncertainty of the simulated samples is used as an estimate for the signal region. With an observation of N Data SB in a sideband, the probability density function for the expected event rate is the Gamma distribution with mean (N Data SB + 1) and dispersion N Data SB + 1 [49]. The predicted background contribution in the signal region is given by: with a relative uncertainty of 1/ N Data SB + 1, where N BGSR is the number of background events in the signal region estimated from the data, and N Data SB is the total number of data events in the sidebands after applying the preselection requirements. Where the background estimate in the signal region is smaller than the statistical uncertainty of the MC prediction, then it is assumed that the background estimate is equal to its statistical uncertainty. Independently of this method, control regions for major backgrounds (tt, Z + jets) are defined to verify the reliability of the simulation tools in describing the data, and good agreement is found. ABCD method As a mass window is not defined for the 4τ analysis, and comprises too large an area in the background region for the 3τ analysis with m Φ ++ < 200 GeV, the sideband method cannot be used for these modes. Instead, we use the 'ABCD method', which estimates the number of background events after the final selection (signal region A) by extrapolating the event yields in three sidebands (B, C and D). The signal region and three sidebands are defined using a set of two observables x and y, that define four exclusive regions in the parameter space. The requirement of negligible correlation between x and y ensures that the probability density function of the background can be factorized as ρ(x, y) = f (x)g(y). It can be shown that the expectation values of the event yields in the four regions fulfill the relation λ A /λ B = λ D /λ C . The quantities λ X are the parameters of the Poisson distribution, which for one measurement correspond to the event counts N X . 
The estimated number of background events in the signal region is then given by The variables RelIso and |m( + − ) − m Z | for the 3τ analysis and RelIso and p T for the 4τ analysis are chosen based on their low correlation and the available amount of data in the sidebands. High values of RelIso populate the sidebands with background events, where jets have been misidentified as leptons. Failing the |m( + − ) − m Z 0 | > 50 GeV requirement gives mainly background contributions from the Drell-Yan and di-boson processes, whereas low values of p T can probe various background processes that possibly contain genuine leptons, but do not belong to the signal phase space. The estimated number of background events agrees well with both the prediction from simulation and the number of data events observed in the signal region. Systematic uncertainties The impact on the selection efficiency of the uncertainties related to the electron and muon identification and isolation algorithms, and the relevant mis-identification rates, detailed in [42,43,46,50,51], are estimated to be less than 2 % using a standard 'tag-and-probe' method [52] that relies upon Z → + − decays to provide an unbiased and high-purity sample of leptons. A 'tag' lepton is required to satisfy stringent criteria on reconstruction, identification, and isolation, while a 'probe' lepton is used to measure the efficiency of a particular selection by using the Z mass constraint. The 2 % uncertainty that is assigned to lepton identification comprises also the charge misidentification uncertainty. The ratio of the overall efficiencies as measured in data and simulated events is used as a correction factor in the bins of p T and η for the efficiency determined through simulation, and is propagated to the final result. The τ h reconstruction and identification efficiency via the HPS algorithm is also derived from data and simulations, using the tag-and-probe method with Z → τ + (→ μ + + ν μ + ν τ )τ − (→ τ h + ν τ ) events [46]. The uncertainty of the measured efficiency of the τ h algorithms is 6 % [46]. Estimation of the τ h energy-scale uncertainty is also performed with data in the Z → τ τ → μ + τ h final state, and is found to be less than 3 %. The τ h charge misidentification rate is measured to be less than 3 %. The theoretical uncertainty in the signal cross section, which has been calculated at NLO, is about 10-15 %, and arises because of its sensitivity to the renormalization scale and the parton distribution functions (PDF) [13]. The ratio α used to estimate the background contribution in the signal region is affected by two main uncertainties. The first is based on the uncertainty of the ratio Table 4. The first eight rows in the table concern the signal and the final two rows the background processes. Correlations of systematic uncertainties within a given decay mode and between different modes are taken into account in the limit calculations. Results and statistical interpretation The data and the estimated background contributions are found to be in reasonable agreement for all final states. Only a few events are observed with invariant masses above 200 GeV, consistent with SM background expectations. The dataset is used to derive limits on the doubly-charged Higgs mass in all decay channels. A CL S method [55] is used to calculate an upper limit for the Φ ++ cross section at the 95 % confidence level (CL), which includes the systematic uncertainties summarized in Table 4. 
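For reference, both data-driven background estimates described above reduce to a few lines. The sketch below uses one plausible reading of the prediction formulas quoted in the text (the α·(N_SB^Data + 1) form for the sideband method, with the two special cases folded into a `max`) and simple Poisson error propagation for the ABCD extrapolation; it is illustrative, not the analysis implementation.

```python
import math

def sideband_prediction(n_sr_mc, n_sb_mc, n_sr_mc_err, n_sb_data):
    """Sideband method: transfer factor alpha from simulation, Gamma-distribution
    mean (N_SB_data + 1) for the observed sideband count."""
    numerator = max(n_sr_mc, n_sr_mc_err)      # MC count, or its stat. uncertainty
    alpha = numerator if n_sb_mc == 0 else numerator / n_sb_mc
    n_bg_sr = alpha * (n_sb_data + 1)
    return n_bg_sr, n_bg_sr / math.sqrt(n_sb_data + 1)     # value, uncertainty

def abcd_estimate(n_b, n_c, n_d):
    """ABCD method: N_A = N_B * N_D / N_C, assuming x and y are uncorrelated."""
    if n_c == 0:
        raise ValueError("Region C must be populated for the extrapolation.")
    n_a = n_b * n_d / n_c
    rel = math.sqrt(sum(1.0 / n for n in (n_b, n_c, n_d) if n > 0))
    return n_a, n_a * rel                                   # value, stat. uncertainty

print(sideband_prediction(n_sr_mc=1.2, n_sb_mc=8.0, n_sr_mc_err=0.4, n_sb_data=6))
print(abcd_estimate(n_b=24, n_c=60, n_d=15))
```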
As the systematic uncertainties are different for each final state, the signal and background yields are separated into five orthogonal categories, based on the number of light leptons and τ -leptons. As an example, event yields in four mass points for BP4 can be found in Table 5. A full list of mass points considered for the limit calculation is given in the end of Sect. 4. When setting limits on 'muon and electron only' channels, we only Table 6. The cross section limits significantly improve on previously published lower bounds on the Φ ++ mass. New limits are also set on the four benchmark points, probing a large region of the parameter space of type II seesaw models. Summary A search for the doubly-charged Higgs boson Φ ++ has been conducted using a data sample corresponding to an integrated luminosity of 4.93 ± 0.11 fb −1 collected by the CMS experiment at a center-of-mass energy of 7 TeV. No evidence for the existence of the Φ ++ has been found. Lower bounds on the Φ ++ mass are established between 204 and 459 GeV in the 100 % branching fraction scenarios, and between 383 and 408 GeV for four benchmark points of the type II seesaw model, providing significantly more stringent constraints than previously published limits.
7,680.4
2012-07-11T00:00:00.000
[ "Physics" ]
Low-Cost IoT-Based Sensor System: A Case Study on Harsh Environmental Monitoring Wireless Sensor Networks (WSNs) are promising technologies for exploiting in harsh environments such as can be found in the nuclear industry. Nuclear storage facilities can be considered harsh environments in that, amongst other variables, they can be dark, congested, and have high gamma radiation levels, which preclude operator access. These conditions represent significant challenges to sensor reliability, data acquisition and communications, power supplies, and longevity. Installed monitoring of parameters such as temperature, pressure, radiation, humidity, and hydrogen content within a nuclear facility may offer significant advantages over current baseline measurement options. This paper explores Commercial Off-The-Shelf (COTS) components to comprise an installed Internet of Things (IoT)-based multipurpose monitoring system for a specific nuclear storage situation measuring hydrogen concentration and temperature. This work addresses two major challenges of developing an installed remote sensing monitor for a typical nuclear storage scenario to detect both hydrogen concentrations and temperature: (1) development of a compact, cost-effective, and robust multisensor system from COTS components, and (2) validation of the sensor system for detecting temperature and hydrogen gas release. The proof of concept system developed in this study not only demonstrates the cost reduction of regular monitoring but also enables intelligent data management through the IoT by using ThingSpeak in a harsh environment. Introduction According to the Nuclear Decommissioning Authority (NDA), on 1 April 2016 the total amount of radioactive waste is estimated to be 4.77 million m 3 [1]. This increasing amount of waste material, which needs to be stored, treated, and disposed of in a proper manner, presents a great number of technical and temporal challenges for the nuclear industry [2]. In the decommissioning site at Cumbria, United Kingdom, the majority of the country's nuclear wastes are currently stored on site. Depending on the type and radioactivity, varying storage strategies are applied within this site. One of the legacy storage facilities at Sellafield requires extraction of the historic Magnox Swarf, which is followed by packaging the material for interim storage before the final processing of the geological disposal. Over many decades of interim storage, a monitoring system needs to be implemented in order to predict the correct chemical evolution of the waste, which is mainly affected by the release of hydrogen gases and heat dissipation. This requires an assurance monitoring scheme in order to make sure that the hydrogen emissions and temperature of radioactive waste are well within the accepted parameter ranges. The Internet of Things (IoT) is composed of numerous inter-related and interconnected devices, machines, and objects sharing data over a network aimed at reaching a common goal [3]. Being an enabling technology of Industrial Revolution 4.0, the goal of the IoT is to allow things and objects to be connected anytime and anywhere with anyone using any The sensors used in the project are in the MQ family along with the Bosch Sensortec BME680. 
To achieve the sensing and monitoring function, the system is constructed using two solar panels for energy harvesting with a rechargeable battery and a power management circuit, a hydrogen sensor MQ-8, a multifunction environment sensor BME680, a Wi-Fi module ESP8266EX, and a Wemos organic light-emitting diode (OLED) display. The circuit connectivity can be seen in Figure 2. The power lines deliver the electricity from the energy harvesting and power management module to the sensors and the communication module, shown as blue lines in Figure 2. The environment data are collected in two forms: analogue data and digital data. The digital data are transmitted via I²C buses, shown as brown lines; the Serial Clock Line (SCL) is driven by the ESP8266EX as a clock signal and the Serial Data Line (SDA) carries bidirectional data. The analogue data are collected directly using the ADC (analogue-to-digital) port of the microcontroller, shown as pink lines. The ESP8266EX module can then send the data to the base station via its self-contained Wi-Fi protocol. The specifications of the components are introduced in the following section.

Sensor Integration
The power management is carried out by the solar panels and the energy is stored in the rechargeable battery for the operation of the entire system. The environmental data are collected by the Bosch Sensortec BME680, which is an environment sensor for gas, humidity, temperature, and pressure detection, and by the MQ-8 sensor, and are transmitted to the local area network via Wi-Fi by the ESP8266EX.

Sensor Selection
Using a single sensor not only simplifies the system design but also reduces the energy consumption for environmental monitoring. However, there is no integrated sensor commercially available for sensing hydrogen gas together with other environmental parameters. Thus, the two sensors used in this system, namely the MQ-8 and the BME680, are suitable for monitoring the temperature and hydrogen release rate to prove the concept.

MQ-8 is an H₂ detection sensor selected for its high sensitivity; it is able to detect hydrogen within a variety of background gases (e.g., nitrogen or air) [16]. The sensor uses stannic oxide (SnO₂) as the sensing material, which has low electrical conductivity in clean air.
When the concentration of hydrogen gas increases, the conductivity of the material increases. The variation in gas concentration within the environment can then be converted from the conductivity change using a simple circuit, as shown in Figure 3. A heater voltage (V_H) is introduced to supply a DC or AC current that heats the sensor to its working temperature. V_C is a DC power source for converting the conductivity change into a voltage signal in the presence of a load resistance R_L, which is in series with the sensor. V_L is the voltage across the load resistance R_L and can be connected to an amplifier circuit, as shown in Figure 3.

BME680 is selected as a 4-in-1 sensor that can measure barometric pressure, relative humidity, ambient temperature, and gas concentration. The package is just 3.0 × 3.0 × 0.93 mm³, enabling a compact design. Another advantage of this sensor is its low temperature coefficient offset (TCO) [17]. The sensor provides digital data via I²C or a Serial Peripheral Interface (SPI) and operates with low current consumption (microamps) at a sampling rate of 1 Hz.
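A minimal reading loop for the two sensors on the ESP8266EX could look like the MicroPython sketch below. The pin assignments and the `bme680` driver module are assumptions (the exact wiring and firmware are not specified in the text); the MQ-8 is read through the single ESP8266 ADC channel and the BME680 over I²C.

```python
# MicroPython sketch for ESP8266 (assumed pinout and third-party bme680 driver)
import time
from machine import ADC, I2C, Pin

adc = ADC(0)                          # MQ-8 load-resistor voltage on the A0 pin
i2c = I2C(scl=Pin(5), sda=Pin(4))     # BME680 on GPIO5/GPIO4 (assumed wiring)

try:
    from bme680 import BME680_I2C     # community driver, assumed to be flashed
    bme = BME680_I2C(i2c)
except ImportError:
    bme = None                        # fall back to reading the MQ-8 only

while True:
    raw = adc.read()                  # 0..1023 on the bare ESP8266 ADC
    v_l = raw / 1023 * 1.0            # A0 full scale is about 1.0 V on this board
    if bme:
        print(bme.temperature, bme.humidity, bme.pressure, bme.gas, v_l)
    else:
        print(v_l)
    time.sleep(1)                     # BME680 is sampled at ~1 Hz
```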
Power Management
The sensor circuit can be operated at a very small current (approximately 5-20 mA) at 5 V DC; however, most of the current in this case is consumed by the heater circuit (around 45 mA). The photovoltaic panel used is able to deliver about 6.2 V and up to 300 mA and is connected as an unregulated power source, through the battery, to the ground and voltage inputs. Hence, a built-in regulator is employed to supply the sensor with a constant 5 V. The DC-DC converter already embedded in the ESP8266EX board converts the circuit voltage to 3.3 V. The power of the circuit (without the hydrogen sensor) is 0.1 W; therefore, the energy for one day is about 2.4 Wh/day, assuming that the power is on for 24 h. The MQ-8 sensor uses 0.5 W of power due to the embedded heater; hence, its energy for one day is 12 Wh/day. In short, the total power of the circuit including the hydrogen sensor MQ-8 is 0.6 W, and the total maximum energy consumption is 14.4 Wh/day.

Communication Method and Display
There are several wireless communication protocols supporting wireless sensing and monitoring, e.g., Bluetooth and ZigBee. The advantage of Wi-Fi is that the sensor can be directly connected to a LAN and update the data to either a local control center or a remote one through the Internet. It is also supported by ThingSpeak [18] to aggregate, visualize, and analyze live data streams in the cloud. The Wi-Fi module based on the ESP8266EX chip offers a complete Wi-Fi networking solution and is also used as a controller for the sensors and the OLED display, which is the Wemos Mini D1 OLED. This module was used as a display module with the ESP8266EX-based Wi-Fi module, providing a user-friendly setup and debugging interface. A serial communication protocol with a baud rate of 115,200 is set, and the device is connected to the Internet with a pre-set Wi-Fi Service Set Identifier (SSID) and password. Rather than implementing a classical PC-based user interface, our aim is to let the user access the data everywhere and, at the same time, increase the portability of the complete system. With a ThingSpeak private account and an Application Programming Interface (API) address, a Transmission Control Protocol connection is then established between the ESP8266EX and the ThingSpeak cloud, and the data can be monitored on the ThingSpeak API website, as shown in Figure 4. The sensor data shown are prior to calibration.
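Pushing a reading to ThingSpeak amounts to a single HTTP request to the channel's update endpoint with the write API key. The MicroPython sketch below is a hedged illustration: the Wi-Fi credentials, API key and field numbering are placeholders, and the `urequests` module is assumed to be available in the firmware.

```python
# MicroPython sketch: connect to Wi-Fi and post one sample to ThingSpeak
import network, urequests, time

SSID, PASSWORD = "my-ssid", "my-password"      # placeholders
API_KEY = "XXXXXXXXXXXXXXXX"                   # ThingSpeak write API key (placeholder)

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(SSID, PASSWORD)
while not wlan.isconnected():
    time.sleep(0.5)

def push(temperature, hydrogen_ppm):
    # field1/field2 must match the channel configuration on thingspeak.com
    url = ("https://api.thingspeak.com/update?api_key={}"
           "&field1={}&field2={}").format(API_KEY, temperature, hydrogen_ppm)
    r = urequests.get(url)
    r.close()

push(23.4, 120.0)    # one temperature [deg C] and hydrogen [ppm] sample
```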
Sensor Calibration
The sensitivities of most sensors degrade over long-term operation relative to their nominal performance specification; hence, calibration is required to make the acquired sensor data set more accurate. Generally, the result is ambiguous when the sample contains more than one gas that the sensor can detect; to differentiate between gases, or to sense more efficiently, a more advanced sensor would be needed. The MQ-8 sensor is designed in such a way that it can respond to a number of gases simultaneously, and the MQ-family sensors are capable of measuring the concentrations of substances that co-exist in a mixture. In the calibration process, we used a voltage reading to convert the output of the gas sensor [19]. The first formula, used in [20], shows that the sensor response is nonlinear in the gas concentration:

R/R₀ = k_gas · C_gas^(−β)  (1)

where R is the sensor resistance, R₀ is the reference sensor resistance, C_gas is the concentration of the gas of interest, β is the characteristic power-law exponent of the sensor, and k_gas is a gas-dependent constant; the relation is thus a power function with a negative exponent. According to [20], the clean-air resistance, for a known supply voltage V_CC and a load resistance R_L of 10 kΩ, can be found from the voltage divider as

R_clean_air = (V_CC − V_out)/V_out · R_L

where R_L is the load resistance, V_CC the sensor supply voltage, and V_out the output voltage. Therefore, the sensor reference resistance R₀ can be determined as the ratio of the clean-air resistance and the air ratio,

R₀ = R_clean_air / air_ratio

where R_clean_air is the sensor reference resistance for clean air and air_ratio = 9.56 is a constant of the MQ-8 sensor. If ppm is the gas concentration in parts per million, then, according to the nonlinear regression, the output equation of the sensor is

log(ppm) = log m + n · log(R_gas/R₀)  (6)

In these equations, R_gas is the sensor resistance in the presence of the gas; the subscript gas stands for the particular gas considered, with H₂ denoting hydrogen.
From the datasheet, no formula is provided for each gas type of the MQ sensor. Using the datasheet's graphical representations [16], we extracted the formula for the gas, as shown in Figure 5. We used the MQ-8 sensor to extract the points on the graph (log ppm_H₂ = log m + n · log(R_H₂/R₀)) for H₂. A set of points was extracted using WebPlotDigitizer to obtain a mathematical model that matches the data. Figure 6a shows the sensitivity curve, i.e., V_RL in hydrogen at different concentrations with a load resistance R_L of 10 kΩ, and Figure 6b shows the long-term stability curve. The response graphs of the sensor provided by the manufacturer are plotted under standard conditions [16]; they provide the baseline characterization for different practical applications, in this case nuclear waste storage. Similarly, the formulas shown in Table 1 can be found experimentally by properly calibrating the MQ-8 sensor for a 1000 ppm H₂ concentration in air and a value of R_L of about 10 kΩ (5 kΩ to 33 kΩ).
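Putting the calibration relations together, the conversion from the measured load-resistor voltage to a hydrogen concentration can be sketched as below. The power-law coefficients m and n correspond to the values extracted from the digitized datasheet curve (Table 1); the numbers used here are placeholders, not the fitted values.

```python
V_CC = 5.0        # sensor supply voltage [V]
R_L = 10_000.0    # load resistance [ohm]
AIR_RATIO = 9.56  # Rs/R0 of the MQ-8 in clean air (datasheet constant)

def sensor_resistance(v_out):
    """Rs from the voltage-divider relation Rs = (Vcc - Vout)/Vout * RL."""
    return (V_CC - v_out) / v_out * R_L

def r0_from_clean_air(v_out_clean_air):
    """Reference resistance R0 = R_clean_air / air_ratio."""
    return sensor_resistance(v_out_clean_air) / AIR_RATIO

def hydrogen_ppm(v_out, r0, m=975.0, n=-2.4):
    """log(ppm) = log(m) + n*log(Rs/R0); m, n are placeholder curve-fit values."""
    ratio = sensor_resistance(v_out) / r0
    return m * ratio ** n          # equivalent to 10**(log10(m) + n*log10(ratio))

r0 = r0_from_clean_air(v_out_clean_air=0.9)    # measured once in clean air
print(hydrogen_ppm(v_out=2.1, r0=r0))
```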
Experimental Setup

The system is designed to perform under harsh environmental conditions and, in particular, the system is supposed to be deployed within the nuclear site. The sensors have been designed to be attached to legacy nuclear waste containers, which have an inner temperature between ~22 and 55 °C at normal room temperature. Additionally, the containers have a relatively low external contact radiation dose rate that does not cause any signal or data transmission interference [21]. However, the system at this stage has not been designed to be exposed to radiation, as it is in the conceptual design phase. Therefore, no shielding is required, and neither is a communication protocol with loss detection and retransmission.

A 3D model rendering is shown in Figure 7, which depicts the stacking of the legacy nuclear waste containers within the storage facilities. The containers have four filters on the lid to vent hydrogen gas to the ambient air. The sensors can be fixed in the vicinity of the filters. Therefore, to replicate the scenario in a laboratory-based experiment, a small stainless steel cubic box was used for the validation of the multidetector sensor's performance in the proximity of metal. Two scenarios are investigated:

• Scenario 1: hot air flow test - pressure, gas, humidity, and temperature change inside and outside the stainless steel box under a hot air flow, to monitor the temperature variation;
• Scenario 2: hydrogen flow test - pressure, gas, humidity, and temperature change when a hydrogen flow is introduced into the stainless steel box, to sense the hydrogen concentration.

In scenario 1, the temperature change is induced by a hot air flow passing over the surface of the sensor. Two conditions are tested individually: outside and inside the stainless steel box, and on the cap of the stainless steel bottle. The hot air flow is produced by a DEWORX Original 2000 W Hot Air Gun, which has two heating levels (600 °C and 300 °C) and two flow rate levels (300 L/min and 500 L/min). For both the inside and outside testing, the hot air gun is set to a fixed flow rate and held at a fixed distance above the sensor with the hot air flow pointing directly at the sensor. This can be seen in Figure 8a.

In scenario 2, a hydrogen gas bottle is used to generate a hydrogen stream in the metal container and increase the hydrogen concentration level inside. The stream is produced at ambient temperature and guided into the metal box. This is shown in Figure 8b.
The environmental parameters are monitored during the process to analyze the influence of the increase in hydrogen concentration.

Figure 9 below shows the overall sensor system prototype. The total size of the prototype is only 3 cm high, 6 cm long, and 1.5 cm wide with the PTFE casing, which has a temperature tolerance of up to 250 °C and is used to protect the electrical components.

Results and Discussion

This section evaluates the performance of the proposed system for analyzing the selected parameters under the two scenarios, which are discussed below. The results obtained were processed in MATLAB and the varying characteristics of the sensor system are presented.

Scenario 1: Hot Air Flow Test

The results were selected from 100 s before switching on the hot air gun to 280 s after, to give a steady reading.
Ten individual tests were carried out and the results were averaged statistically. The deviations between the different tests were calculated as error bars. To compare the trends of the different parameters, the results were normalized by subtracting the average and then dividing by the standard deviation of the data set. The results are shown in Figure 10. The results show that the temperature increases when the hot air gun is switched on and starts to fall after the gun is switched off. The pressure and gas results show the same trend as the temperature change, although the pressure drops more quickly once the gun is switched off, followed by the gas sensor results. It takes a longer time for the temperature to return to its normal status, compared with pressure and gas, due to the residual heat. The humidity decreases during the increase in temperature and increases when the temperature goes down. This is because the humidity sensor measures the relative humidity of the surrounding environment. When the temperature increases, the saturated water vapor pressure increases, which decreases the relative humidity when the water vapor mass in the air remains the same. The results also suggest that the recovery of the humidity is slower than that of the temperature. This is because the humidity sensor has a response time of 8 s, which means it takes 8 s to reach 63% of the total humidity change. For the test results inside the stainless steel box, the hot air gun is switched on at 0 s and switched off at 100 s. The trends in temperature, humidity, and pressure during the heating and cooling process are similar to the outside results, but slower. This is reasonable because the interaction between the inner and outer environments occurs either through the small window or via heat exchange through the metal wall. Notably, the error bars between different tests are also smaller than in the outside results for the temperature and pressure measurements, which suggests that the inside measurement is more stable.
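The averaging and normalization procedure described above (and formalized in Equation (8) in the next subsection) can be sketched as follows. The array shapes and the synthetic traces are assumptions used only to make the snippet self-contained, with each row standing for one of the ten repeated tests:

```python
import numpy as np

def normalize(series: np.ndarray) -> np.ndarray:
    """Z-score normalization: subtract the mean and divide by the standard deviation."""
    return (series - series.mean()) / series.std()

def average_with_error_bars(tests: np.ndarray):
    """tests: array of shape (n_tests, n_samples), one row per repeated test.
    Returns the mean trace and the per-sample standard deviation used as error bars."""
    mean_trace = tests.mean(axis=0)
    error_bars = tests.std(axis=0)
    return mean_trace, error_bars

# Synthetic data standing in for ten repeated temperature traces (placeholders):
rng = np.random.default_rng(0)
temperature_tests = 25 + 10 * np.sin(np.linspace(0, 3, 380)) + rng.normal(0, 0.5, (10, 380))
mean_trace, err = average_with_error_bars(temperature_tests)
normalized = normalize(mean_trace)
```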
Scenario 2: Hydrogen Flow Test

This test shows the hydrogen detection characteristics in comparison with the other parameters. In the test, the hydrogen stream started to blow inside the stainless steel box from 150 s and the parameter results were recorded up to 240 s. The results for the hydrogen flow test are shown in Figure 11, which is used for comparing the change of the hydrogen concentration with the other parameters. As the parameters are on different scales, they must be normalized to fit in one figure for comparison, as shown in Equation (8) below:

P_norm = (P − P̄)/σ (8)

where P stands for the parameter value, P̄ is the mean of the parameter value in one test, and σ is the standard deviation of the parameter value in one test. It is shown that the hydrogen concentration level starts to increase as the stream starts, which shows that the hydrogen sensor is working properly. At the same time, all of the other parameters decrease. The decrease in temperature can be explained by the cooling that occurs when the pressurized gas is released from the bottle. As there is no water content in the hydrogen stream, the humidity also decreases during the process. The pressure appears to decrease overall, but it actually fluctuates up and down due to environmental influences.
Conclusions and Future Work

In this paper, we have presented a conceptual multisensor system that is able to monitor the environmental parameters of nuclear waste storage containers and other harsh-environment applications. The proposed system architecture breaks the IoT structure down into functional blocks and provides a simple means of modularity between the layers. The approach is cost-effective, consumes less power, and is faster to realize than developing a new silicon sensor device. The research also describes the calibration methods for the MQ family of gas sensors for detecting leakage of H2, along with the radiation-tolerant environmental monitoring sensor BME680. The formulae for the calibration of this type of sensor have been extracted from the data representation of the sensitivity curve. The sensor performs with higher sensitivity towards the desired parameter characterization in a preset environment. For both the temperature monitoring and the H2 flow testing experiments, the measured parameters from the proposed sensor demonstrate the expected results. The current system is self-powered by a solar cell, from which the energy is stored in a rechargeable battery. Other energy-harvesting techniques for indoor storage facilities, e.g., from gamma radiation, ambient and transmitted radio frequency, conducted heat from nearby containers, or a resonant magnetic field, should be investigated in the future. In addition, the signal interference and electronics shielding in a radiation environment should also be studied. The robust sensing system provides an alternative way for in-situ online monitoring of gaseous changes, and future work will include the design of a radiation-shielded case for the sensors so that they can be readily deployed in a harsh environment. The work will enable the installation and routine operation of a permanent monitoring solution for legacy nuclear waste containers or any other radioactive waste containers, in order to support the decommissioning strategy in the country. This paper has demonstrated the possibility of using IoT technology for inventory management in the nuclear industry.
8,321.6
2020-12-31T00:00:00.000
[ "Computer Science" ]
Connected-SegNets: A Deep Learning Model for Breast Tumor Segmentation from X-ray Images Simple Summary The segmentation of breast tumors is an important step in identifying and classifying benign and malignant tumors in X-ray images. Mammography screening has proven to be an effective tool for breast cancer diagnosis. However, the inspection of breast mammograms for early-stage cancer can be a challenging task due to the complicated structure of dense breasts. Several deep learning models have been proposed to overcome this particular issue; however, the false positive and false negative rates are still high. Hence, this study introduced a deep learning model, called Connected-SegNets, that combines two SegNet architectures with skip connections to provide a robust model to reduce false positive and false negative rates for breast tumor segmentation from mammograms. Abstract Inspired by Connected-UNets, this study proposes a deep learning model, called Connected-SegNets, for breast tumor segmentation from X-ray images. In the proposed model, two SegNet architectures are connected with skip connections between their layers. Moreover, the cross-entropy loss function of the original SegNet has been replaced by the intersection over union (IoU) loss function in order to make the proposed model more robust against noise during the training process. As part of data preprocessing, a histogram equalization technique, called contrast limit adapt histogram equalization (CLAHE), is applied to all datasets to enhance the compressed regions and smooth the distribution of the pixels. Additionally, two image augmentation methods, namely rotation and flipping, are used to increase the amount of training data and to prevent overfitting. The proposed model has been evaluated on two publicly available datasets, specifically INbreast and the curated breast imaging subset of digital database for screening mammography (CBIS-DDSM). The proposed model has also been evaluated using a private dataset obtained from Cheng Hsin General Hospital in Taiwan. The experimental results show that the proposed Connected-SegNets model outperforms the state-of-the-art methods in terms of Dice score and IoU score. The proposed Connected-SegNets produces a maximum Dice score of 96.34% on the INbreast dataset, 92.86% on the CBIS-DDSM dataset, and 92.25% on the private dataset. Furthermore, the experimental results show that the proposed model achieves the highest IoU score of 91.21%, 87.34%, and 83.71% on INbreast, CBIS-DDSM, and the private dataset, respectively. Introduction The United States of America reported a total of 43,250 female deaths and 530 male deaths due to breast cancer in 2022 [1]. Researchers are motivated by these statistics to develop accurate tools for early breast cancer diagnosis, which will offer physicians more options for treatment. Mammograms are still being widely used to detect the presence of any abnormalities in breasts [2][3][4]. Mammogram images show different types of breast tissues as pixel clusters with different intensities [5]. These tissues include fiber-glandular, fatty, and pectoral muscle tissues [6]. On mammography, abnormal tissues such as lesions, tumors, lumps, masses, or calcifications may be indicators of breast cancer [7,8]. However, there is always the possibility of human error when analyzing and diagnosing breast cancer due to dense breasts and the high variability between patients [9][10][11]. 
Additionally, mammography screening sensitivity is affected by image quality and radiologist experience [12,13]. Automated techniques are being developed to analyze and diagnose breast mammograms with the goal of counteracting this variability and standardizing diagnostic procedures [14,15]. The rapid emergence of artificial intelligence (AI) and deep learning (DL) has significant implications for breast cancer diagnosis [16][17][18]. The advancements in image segmentation using convolutional neural networks (CNNs) have been applied to segment breast cancer from X-ray images [19][20][21][22][23]. The earlier works on mass segmentation faced some challenges, such as low signal to noise ratio, indiscernible mass boundaries, high false positives, and high false negative rates. To address these challenges, one study proposed a deeply supervised UNet model (DS U-Net) coupled with dense conditional random fields (CRFs) for lesion segmentation from whole mammograms [19]. The DS U-Net model has produced a Dice score of 79% on the INbreast dataset and 83% on the CBIS-DDSM dataset, whereas its IoU score is 83% and 86% on the INbreast and CBIS-DDSM datasets, respectively. Another study [20] proposed an attention-guided dense up-sampling network (AU-Net) for accurate breast mass segmentation from mammograms. An asymmetrical encoder-decoder structure is employed in this AU-Net and it uses an effective up-sampling block and attention-guided dense up-sampling block (AU block). The AU block is designed to have three merits. First, dense upsampling compensates for the information loss experienced during bilinear up-sampling. Second, it integrates high-and low-level features more effectively. Third, it highlights channels with rich information via the channel attention function. Compared to the state-of-the-art FCNs, AU-Net achieved the best performance, with a Dice score of 90% on the INbreast dataset and 89% on the CBIS-DDSM dataset. However, such models do not capture the features of different scales of masses effectively, and therefore they suffer from low segmentation accuracy. Hence, a new model, called UNet, was presented to mitigate the limitations of the previous models [21]. UNet integrates the high-level features of the encoder with the low-level features of the decoder. Through skip connections, the UNet architecture was able to maintain this form of fusion for a variety of medical applications. The UNet architecture achieves better performance on different biomedical segmentation applications. Asma Baccouche et al. [22] introduced Connected-UNets to segment breast masses. This method integrated atrous spatial pyramid pooling (ASPP) in the two standard UNets. The architecture of Connected-UNets was built on the attention network (AUNet) and residual network (ResUNet). To augment and enhance the images, cycle-consistent generative adversarial networks (CycleGANs) were used between two unpaired datasets. Additionally, a regional deep learning approach called you-only-look-once (YOLO) has been used to detect breast lesions from mammograms. Finally, a full-resolution convolutional network (FrCN) has been implemented to segment breast lesions. The Connected-UNets model has produced a Dice score of 94% and 92% on the INbreast and CBIS-DDSM datasets, respectively. Moreover, it has achieved an IoU score of 90% and 86% on INbreast and CBIS-DDSM, respectively. Badrinarayanan et al. 
[23] proposed a practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation, termed SegNet. Its segmentation architecture consists of an encoder network and a decoder network followed by a pixel-wise classification layer. Topologically, the architecture of the encoder network matches that of the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low-resolution encoder feature maps to full-input-resolution feature maps for pixel-wise classification. The SegNet model has achieved satisfactory segmentation performance. However, since the SegNet architecture does not consist of skip connections, incorporating fine multiscale information during the training process is challenging. This study combines the characteristics of the Connected-UNets and SegNet models to form Connected-SegNets from two standard SegNets with skip connections for breast tumor segmentation from breast mammograms. The flow chart of the proposed system is illustrated in Figure 1. The major contributions of this study include the following. 1. This study proposes a deep learning model called Connected-SegNets for breast tumor segmentation from X-ray images. 2. The proposed model, Connected-SegNets, is designed using skip connections, which helps to recover the spatial information lost during the pooling operations. 3. The original SegNet cross-entropy loss function has been replaced by the IoU loss function to overcome any noisy features and enhance the detection of the false negative and false positive cases. 4. The histogram equalization method of the contrast limit adapt histogram equalization (CLAHE) is applied to all datasets to enhance the compressed areas and smooth the pixel distribution. 5. Image augmentation methods including rotation and flipping have been used to increase the number of training data and to reduce the impact of overfitting. The rest of this paper is organized as follows. Section 2 describes the datasets and architectural details of the proposed method. Section 3 presents the experimental results. Section 4 discusses the merits of this study. Finally, the article is concluded with its primary findings in Section 5. Materials and Methods This research uses the two publicly available datasets of INbreast and CBIS-DDSM, and one private dataset obtained from Cheng Hsin General Hospital in Taiwan. Initially, a histogram equalization, CLAHE, is applied to all datasets to enhance the compressed areas and smooth the pixel distribution. Then, each X-ray dataset is randomly divided into 70%, 15%, and 15% for training, validation, and testing, respectively. Finally, the training and validation samples are augmented to increase the amount of data before feeding them to the proposed Connected-SegNets model. Datasets The proposed model, Connected-SegNets, has been evaluated on the following datasets. INbreast Dataset The INbreast dataset is a collection of mammograms from Centro de Mama Hospital de S. João, Breast Centres Network, Porto, Portugal. A total of 410 images with 115 cases were collected from August 2008 to July 2010 [24,25], and 95 of 115 cancer cases involved both breasts in women. Four different types of breast diseases are recorded in the database, including calcification, mass, distortions, and asymmetries. This database includes images from craniocaudal (CC) and mediolateral oblique (MLO) perspectives. 
Moreover, the breast density is divided into four categories according to the breast imaging reporting and data system (BI-RADS) assessment categories, which are: entirely fat (BI-RADS 1), scattered fibroglandular (BI-RADS 2), heterogeneously dense (BI-RADS 3), and extremely dense (BI-RADS 4). All the images were saved in two sizes: 3328 × 4084 or 2560 × 3328 pixels. Among the 410 mammograms, 107 images contain breast tumors. Hence, these 107 images were selected for this study. The 107 images were randomly split into 90 images for training and 17 images for testing, as shown in Table 1. The image augmentation methods, including rotation and flipping, were applied to the training data. The augmentation methods increased the number of breast tumor mammography images to 720 images. The 720 images were randomly split into 576 images for training data and 174 images for validation data, as shown in Table 2. CBIS-DDSM Dataset The DDSM is a public dataset provided by the University of South Florida Computer Science and Engineering Department, Sandia National Laboratories, and Massachusetts General Hospital [26]. The CBIS-DDSM is an updated and standardized version of the DDSM [27]. It contains a variety of pathologically verified cases, including malignant, benign, and normal cases. DDSM is an extremely useful database for the development and testing of computer-aided diagnosis (CAD) systems due to its scale and the ground truth validation it offers. The CBIS-DDSM collection includes a subset of the DDSM data organized by expert radiologists. It also comprises pathological diagnosis, bounding boxes, and region of interest (ROI) segmentation for training data. Among all mammography images with tumors in the CBIS-DDSM dataset, 838 images were selected for this study. The 838 images were randomly split into 728 images for training data and 110 images for testing data, as shown in Table 1. The image augmentation methods, including rotation and flipping, were applied to the training samples. Through image augmentation, the number of breast tumor mammography images was increased to 5824. The 5824 images were randomly split into 4659 images for training data and 1165 images for validation data, as shown in Table 2. Private Dataset The private dataset comprised mammography images from the Cheng Hsin General Hospital, Taipei City, Taiwan. Initially, VGG image annotator (VIA) software was used by an expert radiologist from the department of medical imaging to mark the tumor location based on the pathological data [28]. Then, all the labeled images were verified and confirmed by the department of hematology and oncology. Finally, the dataset was de-identified for patient privacy. A total of 196 mammography images were collected from January 2019 to December 2019. All the mammograms consist of tumors with a grade of breast imaging reporting and data system assessment category 4 (BIRADS 4) or higher. A total of 196 mammography images were randomly split into 148 images for training and 48 images for testing, as shown in Table 1. The image augmentation methods, including rotation and flipping, were applied to the training samples. Through image augmentation methods, the number of breast tumor mammography images was increased to 1184. The 1184 images were randomly split into 947 images for training and 237 images for validation, as shown in Table 2. Data Preprocessing This research study only focused on the segmentation step. Initially, the ROI of the tumor was cropped manually. 
The ROI of the tumor was resized into 256 × 256. In order to eliminate additional noise and degradation caused by the scanning process of digital X-ray mammography, all images were preprocessed [29,30].

Histogram Equalization

Histogram equalization is a well-known technique widely used for contrast enhancement [31]. It is used in a variety of applications, including medical image processing and radar signal processing, due to its simple function and effectiveness [32][33][34][35]. Histogram equalization distributes the pixels well over the full dynamic intensity range. One drawback of histogram equalization is that the background noise can be increased when the image is too bright or too dark in a local area after the histogram equalization, which is mainly due to the flattening property of histogram equalization. This study applied the local histogram equalization method called CLAHE to address the above challenges. CLAHE is an adaptive extension of histogram equalization. It helps in the dynamic preservation of the local contrast features of an image. CLAHE has been applied to all datasets of this study. The sample results on the datasets after applying CLAHE are shown in Figure 2. From Figure 2, it is noted that the edges of the tumors became clearer after applying the CLAHE technique. A total of 107, 838, and 196 ROIs were obtained from the INbreast, CBIS-DDSM, and private datasets, respectively. The complete details of the mammography datasets are listed in Table 1.

Figure 2. Sample results after applying the histogram equalization (CLAHE) to random ROI images from the datasets.

Image Augmentation

The most common problem that DL models might face is the overfitting problem due to the limited amount of training samples [36][37][38]. As a result of overfitting, a model might detect or classify features derived from the training samples, but the same model will not be able to detect or classify features derived from unseen samples. To address the issue of overfitting, this study has used two image augmentation methods, namely rotation and flipping. First, bi-linear interpolation has been used to rotate each image around its center point by increments of 90° counter-clockwise, up to 360°. By using the bi-linear interpolation method, the rotated image has the same aspect ratio as the original image, without losing any part of the image. Second, mirroring or flipping is the simplest augmentation approach. It results in a dataset with twice as many images. The flipping technique is basically the same as the rotation technique; however, it transforms the image in the reverse (mirrored) direction. The sample results on the datasets after applying the augmentation methods are shown in Figure 3 (original ROI, rotation, and horizontal flipping). The raw ROIs of the training data were augmented by rotating at an angle of 90° and horizontal flipping. Hence, a total of 720, 5824, and 1184 ROIs were generated from the INbreast, CBIS-DDSM, and private datasets, respectively. Then, the data were randomly split into training and validation sets. Detailed information on the mammography datasets in terms of the training data is provided in Table 2.

Proposed Model

SegNet can record pooling indices when applying max pooling. These pooling indices are used to up-sample the images to the original size. Hence, the required graphics processing unit (GPU) memory for training the model can be lower.
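As a concrete illustration of the preprocessing described in the two subsections above (CLAHE followed by 90° rotations and flipping), the OpenCV-based sketch below shows one possible implementation. The clip limit, tile grid size, and file names are assumptions, since the paper does not report its exact CLAHE parameters; the snippet is a sketch rather than the authors' code.

```python
import cv2
import numpy as np

def apply_clahe(gray_roi: np.ndarray, clip_limit: float = 2.0, tile_grid=(8, 8)) -> np.ndarray:
    """Contrast limited adaptive histogram equalization on a grayscale ROI."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray_roi)

def augment(roi: np.ndarray, mask: np.ndarray):
    """Yield ROI/mask pairs for 0/90/180/270-degree rotations plus a horizontal flip of each."""
    for k in range(4):                      # 0, 90, 180, 270 degrees counter-clockwise
        r, m = np.rot90(roi, k), np.rot90(mask, k)
        yield r, m
        yield np.fliplr(r), np.fliplr(m)

# Example usage on a resized 256x256 ROI (file names are placeholders):
roi = cv2.resize(cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE), (256, 256))
mask = cv2.resize(cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE), (256, 256))
pairs = list(augment(apply_clahe(roi), mask))
```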
Inspired by the success of SegNet and Connected-UNets, this research proposed a model, called Connected-SegNets, which connects two standard SegNets using additional adapted skip connections. The overall architecture of the proposed Connected-SegNets model is shown in Figure 4. The proposed model consists of two encoder and two decoder networks. The first decoder network and the second encoder network are connected with additional skip connections after cascading a second SegNet. This helps to recover the fine-grained features that are lost in the encoding of the SegNet and apply them to encode the high-resolution features by connecting them to the previously decoded features. The proposed Connected-SegNets architecture is deepened by stacking two SegNets. The upper half of the proposed architecture is similar to SegNet, which uses the first 13 convolutional layers in the VGG16 network as the encoder network [39]. In the decoder network, the last convolutional layer is removed. Each encoder network comprises two convolutional kernels, which includes 3 × 3 convolutional layers followed by an activation rectified linear unit (ReLU) and a batch normalization (BN) layer. Then, a maximum pooling indices operation is applied to the output of each encoder network before passing the information to the next encoder. Each decoder network consists of a 2 × 2 transposed convolution unit that is concatenated with the previous encoder output, and then the result is fed into two convolution blocks, which consist of 3 × 3 convolutions followed by an activation ReLU and a BN layer. Additionally, a second SegNet is attached to the first SegNet through new skip connections that use information from the first up-sampling pathway. The result of the last decoder block is concatenated with the same result after being fed into a 3 × 3 convolution layer followed by an activation ReLU and a BN layer. This serves as the input of the first encoder network to the second SegNet. The output of the maximum pooling indices operations of each of the three encoder networks is fed into 3 × 3 convolution layers and then concatenated with the output of the last previous decoder network. The result is next down-sampled to the next encoder network. Finally, the last output is given to a dilation layer with a dilation rate of 3, followed by an advanced ReLU activation layer to generate the predicted mask. In order to obtain more features, a dilation layer with a dilation rate of 3 is used in the last layer. Moreover, an activation ReLU limits the maximum value to 1, which is called an advanced ReLU. The details of the Connected-SegNets layers are listed in Table 3. Experimental Environment and Parameter Settings All experiments were performed using a PC with an Intel i7-9700K CPU, 55 GB of DDR4 RAM, and an NVIDIA GeForce RTX 2080Ti GPU with 11 GB of memory. The software environment used a Windows 10 64-bit operating system, python 3.8.12, CUDA 10.1, cuDNN 7.6.5, and TensorFlow 2.8.0. The learning rate was set to 0.0001 using the Adam optimizer [40] and the batch size was 4. The loss function was the IoU loss function. Evaluation Metrics In this research, precision, recall, IoU score, and Dice score evaluation metrics have been used to evaluate the proposed model based on the confusion matrix. The confusion matrix is an evaluation metric often used to evaluate classification, detection, and segmentation algorithms. The confusion matrix shows information about the true classes and the predicted classes. 
The true class and the predicted class can be positive or negative. The true positive (TP) case is when both the true case and the predicted case are tumors. The true negative (TN) case is when both the true case and the predicted case are non-tumors. False negatives (FN) occur when the true case is a tumor, but the predicted case is not. The false positive (FP) case occurs when the true case is a non-tumor while the prediction is a tumor. The Dice score is also known as the F1-score, which represents the harmonic mean of precision and recall, as expressed in Equation (3). Additionally, the IoU evaluation metric represents the percentage of overlap between the predicted classes and the true classes, as represented in Equation (4).

Results on INbreast Dataset

The confusion matrix results of Connected-SegNets on the INbreast dataset are listed in Table 4. From Table 4, it is observed that the proportion of actual tumors that was correctly identified as tumors (TP) by Connected-SegNets is 96%. This is the highest TP rate compared to the other datasets. In addition, the proportion of non-tumors that was correctly identified as non-tumors (TN) by Connected-SegNets is 88%.

Results on CBIS-DDSM Dataset

The identification results of Connected-SegNets on the CBIS-DDSM dataset are listed in Table 5. From Table 5, it can be seen that the proportion of true tumors that was correctly identified as tumors (TP) by Connected-SegNets is 93%. Moreover, the proportion of non-tumors that was correctly identified as non-tumors (TN) by Connected-SegNets is 87%.

Results on Private Dataset

The results of the Connected-SegNets model on the private dataset are listed in Table 6. It is observed that the proportion of actual tumors that was correctly identified as tumors (TP) by Connected-SegNets is 92%. On the other hand, the proportion of non-tumors that was correctly identified as non-tumors (TN) by Connected-SegNets is 89%. This TN rate is the highest compared to the other datasets. The accuracy and loss curves of the training and validation for Connected-SegNets are shown in Figures 5 and 6, respectively. It can be noted from Figures 5 and 6 that the training and validation curves behave similarly, which is an indication that the proposed Connected-SegNets can be generalized and does not suffer from overfitting. A large number of epochs might cause a deep learning model to overfit the data, whereas a small number of epochs can lead to smooth convergence. Therefore, the early stop technique has been utilized during the model training to avoid overfitting. The validation dataset is used to track the model training performance. The early stop method can help to set a suitable training epoch by tracking the best performance on the validation dataset. Therefore, when the validation performance stops improving, an early stop mode of the training process will be activated. Moreover, using the early stop algorithm not only can avoid the overfitting problem, but it can also help with choosing the optimal hyperparameter configurations for training the model. The early stop algorithm steps are shown in Algorithm 1. In this research, the validation tracking, ActStepSetting, was set to 20 iterations. Hence, if the validation performance did not improve after 20 iterations, the training was stopped automatically.
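As an illustration of the loss used for training and the Dice/IoU metrics described above, the following TensorFlow sketch shows one common soft-IoU formulation and the scores computed from binary masks. It is a generic implementation written for this summary, not the authors' exact code, and the smoothing constant and threshold are assumptions:

```python
import tensorflow as tf

def iou_loss(y_true, y_pred, smooth=1e-6):
    """Soft IoU loss for binary segmentation masks with values in [0, 1]."""
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

def dice_and_iou(y_true, y_pred, threshold=0.5, smooth=1e-6):
    """Dice and IoU scores for a thresholded prediction against the ground-truth mask."""
    y_pred = tf.cast(y_pred > threshold, tf.float32)
    tp = tf.reduce_sum(y_true * y_pred)
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    dice = (2.0 * tp + smooth) / (2.0 * tp + fp + fn + smooth)
    iou = (tp + smooth) / (tp + fp + fn + smooth)
    return dice, iou

# Example compile call matching the training settings reported above:
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=iou_loss)
```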
Comparison of Segmentation Results As shown in Table 7, the segmentation results of each testing datum were evaluated by the two evaluation metrics, Dice score and IoU score, for the segmented maps per pixel, and compared with the original ground truth. It is noted that the proposed Connected-SegNets model produced the highest Dice score of 96.34%, 92.86%, and 92.25% on the INbreast, CBIS-DDSM, and private datasets, respectively. Moreover, the proposed model achieved the highest IoU Score of 91.21%, 87.34%, and 83.71% on the INbreast, CBIS-DDSM, and private datasets, respectively. Finally, the comparative results show that the proposed model, Connected-SegNets, outperformed the related models in terms of Dice score and IoU score on the three datasets. Figure 7 shows some examples of the segmented ROI results generated by different models against their ground truth images. It is clearly observed that the quality of the segmentation maps of the Connected-SegNets model contain less error and produce more precise segmentation compared to other methods. Discussion In recent years, several DL models have been developed and applied for breast tumor segmentation. These DL models have achieved remarkable success in segmenting breast tumors in mammograms. Nevertheless, many of these DL models produce high false positive and false negative rates [41]. The SegNet model is considered to be one of the deep learning models that is easy to modify and further optimize to provide better segmentation performance in different fields. Therefore, this study proposed a DL model, called Connected-SegNets, based on SegNet, for better breast tumor segmentation. The main goal of the proposed Connected-SegNets model is to improve the overall performance of breast tumor segmentation. Hence, several techniques have been implemented and incorporated into the proposed method in order to achieve this goal. These techniques include deepening the architecture with two SegNets, replacing the cross-entropy loss function of the standard SegNet with the IoU loss function, applying histogram equalization (CLAHE), and performing image augmentation. Figure 7 illustrates the segmentation results of AUNet, Standard UNet, Connected-UNets, Standard SegNet, and the proposed Connected-SegNets on the testing data of the INbreast, CBIS-DDSM, and private datasets. The segmentation results of the proposed Connected-SegNets are the closest to the ground truth compared to those of the AUNet, UNet, Connected-UNets, and SegNet models. The proposed model fully connects two single SegNets using additional skip connections. These are helpful to recover the spatial information that is lost during the pooling operations. Moreover, the IoU loss function leads to a more robust model. Furthermore, the histogram equalization (CLAHE) has been applied to smoothen the distribution of the image pixels for better pixel segmentation. Additionally, image augmentation methods, including rotation and flipping, have been applied to increase the number of training samples and reduce the impact of overfitting. This has led to more accurate segmentation performance compared to the other models. The significant improvement is shown in Tables 4-6, where the Connected-SegNets model has the TP value of 96%, 93%, and 92%, on the INbreast, CBIS-DDSM, and private datasets, respectively. Similarly, the TN value is of 88%, 87%, and 89%, on INbreast, CBIS-DDSM, and the private dataset, respectively. 
The results of the proposed model, Connected-SegNets, showed a significant segmentation improvement compared to the other models, with a maximum Dice score of 96.34% on the INbreast dataset, 92.86% on the CBIS-DDSM dataset, and 92.25% on the private dataset. Similarly, the Connected-SegNets model has achieved the highest IoU score of 91.21% on the INbreast dataset, 87.34% on the CBIS-DDSM dataset, and 83.71% on the private dataset. Overall, the proposed Connected-SegNets model has outperformed DS U-Net, AUNet, UNet, Connected-UNets, and SegNet in terms of Dice score and IoU score. This shows the power of the proposed model to learn complex features through the connections added between the two SegNets in the proposed Connected-SegNets, which take advantage of the decoded features as another input in the encoder pathway. Conclusions This research proposed a deep learning model, namely Connected-SegNets, for breast tumor segmentation from X-ray images. Two SegNets were used in the proposed model, both of which were fully connected via additional skip connections. The cross-entropy loss function of the original SegNet was replaced by the IoU loss function to make the proposed model more robust against sparse data. Additionally, the contrast limit adapt histogram equalization (CLAHE) was applied to enhance the compressed areas and smooth the pixel distribution. Moreover, two augmentation methods including rotation and flipping were used to increase the number of training samples and prevent overfitting. The experimental results showed that Connected-SegNets outperformed the existing models, with the highest Dice scores of 96.34%, 92.86%, and 92.25%, and the highest IoU scores of 91.21%, 87.34%, and 83.71% on the INbreast, CBIS-DDSM, and private datasets, respectively. Future work will focus on implementing new deep learning algorithms for tumor detection and classification for automatic breast cancer diagnosis.
6,294.4
2022-08-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Direct Z-scheme GaN/WSe2 heterostructure for enhanced photocatalytic water splitting under visible spectrum van der Waals heterostructures are widely used in the field of photocatalysis due to the fact that their properties can be regulated via an external electric field, strain engineering, interface rotation, alloying, doping, etc. to promote the capacity to separate photogenerated carriers. Herein, we fabricated an innovative heterostructure by piling monolayer GaN on isolated WSe2. Subsequently, a first principles calculation based on density functional theory was performed to verify the two-dimensional GaN/WSe2 heterostructure and explore its interface stability, electronic property, carrier mobility and photocatalytic performance. The results demonstrated that the GaN/WSe2 heterostructure has a direct Z-type band arrangement and possesses a bandgap of 1.66 eV. The built-in electric field is caused by the transfer of positive charge from the WSe2 layer to the GaN layer, directly leading to the segregation of photogenerated electron–hole pairs. The GaN/WSe2 heterostructure has high carrier mobility, which is conducive to the transmission of photogenerated carriers. Furthermore, the Gibbs free energy changes to a negative value and declines continuously during the water splitting reaction into oxygen without supplementary overpotential in a neutral environment, satisfying the thermodynamic demands of water splitting. These findings verify the enhanced photocatalytic water splitting under visible light and can be used as the theoretical basis for the practical application of GaN/WSe2 heterostructures.

Introduction

Hydrogen energy is anticipated to be an alternative fuel to fossil fuels in the near future, [1][2][3] and in this case, photocatalytic water splitting is an attractive approach to produce hydrogen. 4 The traditional photocatalytic material TiO2 has a wide band gap (BG) of 3.20 eV, which is only activated by ultraviolet light with a wavelength of less than 385 nm, and its hydrogen production efficiency is extremely low. 5 Alternatively, two-dimensional (2D) transition metal dichalcogenides (TMDCs) such as WSe2, MoS2, and MoSe2 have significant application prospects in photocatalytic water splitting due to their excellent electronic properties, high carrier mobilities and visible-light response. In particular, it has been confirmed theoretically that monolayer WSe2 has robust photoluminescence and high carrier mobility (705 cm2 V−1 s−1), which are superior to those of MoS2 and MoSe2. [6][7][8] Nevertheless, the photocatalytic performance of monolayer WSe2 is still limited owing to its high transmittance and poor photogenerated electron-hole separation efficiency. Hence, the use of a co-catalyst to reduce the electron-hole recombination is strongly recommended. In recent years, some studies have proven that the construction of van der Waals heterostructures (vdWHs) is a valid approach to improve the photocatalytic efficiency of 2D materials. 9,10 vdWHs retain the electronic properties of individual layers, and the interface effect of heterostructures endows them with some properties that are not present in their respective components, such as regulating the bandgap energy and improving the segregation efficiency of photogenerated electron-hole pairs. 11,12 Traditional type II heterostructures have the advantages of response in an expanded spectrum range and promoted carrier separation, 13 but their redox ability is poor.
Thus, the design and fabrication of novel direct Z-scheme photocatalysts are attracting increasing interest to improve the redox ability and transfer performance of photo-generated charges. [14][15][16] GaN monolayers are very promising for application in high-performance opto-electronic devices due to their semiconducting character with a suitable bandgap of about 2.3 eV, 17,18 which is narrower than that of bulk GaN. 19 Previous research disclosed that GaN shares an identical hexagonal configuration and related lattice constants with TMDCs, making them compatible. 20 Also, a GaN thin film was prepared via chemical vapor deposition from SL-WSe2/c-sapphire to achieve GaN/WSe2 heterostructures. 21 R. Meng et al. revealed that the existence of band offsets and intrinsic electric fields leads to reinforced photocatalytic activity between WSe2/GaN and WS2/GaN. 22 Shaoqian Yin et al. studied the effects of modifying the electric field and strain on the optical and electronic characteristics of GaN/WSe2 heterostructures with various stacking configurations. However, a systematic study has not been performed to date on the photocatalytic performance of GaN/WSe2 heterostructures. 23 In this study, the structural stability, electronic properties, carrier mobilities, and photocatalytic performance of GaN/WSe2 heterostructures were explored via first-principles calculations. The calculations of the energy band gap showed that the GaN/WSe2 heterostructure is a representative direct Z-scheme with a built-in electric field from GaN to WSe2. Meanwhile, the carrier mobilities of the GaN/WSe2 heterostructure, which influence the dissociation efficiency, were also amplified. In addition, the calculation of the Gibbs free energy of the GaN/WSe2 system clarifies the oxygen evolution reaction (OER) process. Consequently, it was inferred that GaN/WSe2 heterostructures, which possess superior photocatalytic capacities under visible light, are favorable photocatalysts in the field of water splitting.

Calculation methods and models

The theoretical analyses were performed entirely with the Vienna ab initio simulation package (VASP) 24 within the projector augmented plane-wave (PAW) pseudopotentials using density functional theory (DFT). The Perdew-Burke-Ernzerhof (PBE) algorithm was adopted for the exchange-correlation functional. 25 The two dispersion corrections DFT-TS 26 and DFT-D3 (ref. 27) were considered in the computation to account for the impact of non-covalent forces. The valence electron schemes were as follows: 3d10 4s2 4p1 for Ga, 2s2 2p3 for N, 5d4 6s2 for W, and 4s2 4p4 for Se. A Monkhorst-Pack k-point grid of 7 × 7 × 1 and a cutoff energy of 480 eV were employed in the first Brillouin zone. The maximum force was set as 0.01 eV Å−1 and the energy convergence threshold was 10−5 eV.

Interface stability

To inspect the impact of the interfacial interactions between the GaN and WSe2 nanosheets on the structural stability of GaN/WSe2 combinations, the compositional dependence of the total energy is discussed. Six representative parallel alignments of GaN/WSe2 heterostructures are displayed in Fig. 1. To calculate the structural stability of the six stacked models quantitatively, the D3 and TS dispersion corrections were included to calculate the relative total energies of the six patterns (compared with the most stable model) applying the PBE method, respectively, as shown in Fig. 2.
The results show that the different calculation methods have analogous variation tendencies in the relative total energies of the six configurations, demonstrating that the computational outcomes are dependable. The figure also shows that model V has the lowest relative energy among the models with the DFT-D3 algorithm, while model VI has the lowest relative energy compared with the others with the DFT-TS algorithm. Hence, all subsequent calculations were built on these two models. To further study the structural stability of the GaN/WSe2 heterostructure, the lattice mismatch ratio and mismatch energy between the two monolayers were calculated. The lattice mismatch ratio is defined as R_mis = (a_2 − a_1)/a_1, where a_1 and a_2 represent the lattice constants of the GaN and WSe2 monolayers, respectively. Table 1 lists the R_mis of the GaN/WSe2 heterostructures calculated by the DFT-D3 and DFT-TS methods, which are 3.00% and 3.92%, respectively, showing a good match. 31 Moreover, the lattice mismatch energies of the GaN/WSe2 heterostructures were determined using the following equation:

ΔE_mis = [E_(GaN)a + E_(WSe2)a − E_GaN − E_WSe2]/S

where E_(GaN)a and E_(WSe2)a are the total energies of isolated GaN and WSe2 under the equilibrium lattice parameters of the GaN/WSe2 heterostructure, respectively, E_GaN and E_WSe2 indicate the total energies of the single GaN and WSe2 nanosheets before contact, and S represents the interfacial area of the heterostructure. The mismatch energy results are exhibited in Table 1.

Table 1. Calculated lattice parameters (Å), lattice mismatch ratio (%), cohesive energies (meV Å−2), mismatch energies (meV Å−2), vdW binding energies (meV Å−2) and equilibrium interlayer distances (Å) of GaN/WSe2 heterostructures using the dispersion-corrected DFT-TS and DFT-D3 approaches after geometric relaxation.

ΔE_mis computed by the DFT-D3 and DFT-TS approaches is 1.84 meV Å−2 and 4.14 meV Å−2, respectively, which is substantially below that of WS2/WSe2, 34 MoS2/WSe2, 33 and graphene/WSe2. 32 In the heterostructure, the lattice mismatch of the isolated GaN and WSe2 caused by strain-driven interactions is almost negligible. To elucidate the adsorption interaction between the GaN and WSe2 monolayers, the two above-mentioned methods were chosen to obtain the interfacial cohesive energy, E_coh, at different interlayer distances, d. E_coh is expressed as follows:

E_coh = (E_GaN/WSe2 − E_GaN − E_WSe2)/S

where E_GaN/WSe2, E_GaN, and E_WSe2 correspond to the total energies of the relaxed GaN/WSe2 heterostructure, the GaN monolayer and the WSe2 monolayer, separately. The E_coh of the GaN/WSe2 composites at the most stable interlayer spacing is −0.78 meV Å−2 and −2.55 meV Å−2 for DFT-D3 and DFT-TS, respectively, as displayed in Table 1. It is noticeable that the negative E_coh values imply stable heterostructures. 35,36 Therefore, the cohesion between GaN and WSe2 stabilizes the geometry. In addition, the E_coh trends calculated using the two methods are very similar as the interfacial spacing changes, proving the dependability of the calculated results. Finally, the van der Waals energy was introduced to quantitatively describe the magnitude of the interlayer van der Waals force; its magnitude is determined by the lattice mismatch energy and the interface binding energy. The calculated results are 2.61 eV and 6.69 eV, which are within the normal vdW binding energy range, 37,38 indicating the existence of slight van der Waals forces in the GaN/WSe2 heterostructures.
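Once the total energies are available, the stability descriptors defined above are simple arithmetic. The following sketch shows the bookkeeping for the lattice mismatch ratio, the mismatch energy per area, and the cohesive energy per area; all numbers are placeholders for illustration and are not the values reported in Table 1:

```python
# Hypothetical inputs (placeholders, not the energies of this work)
a_gan, a_wse2 = 3.21, 3.32                         # lattice constants (angstrom)
area = 9.6                                          # interfacial area S (angstrom^2)
e_gan_strained, e_wse2_strained = -12.40, -19.55    # eV, at the heterostructure lattice
e_gan, e_wse2 = -12.45, -19.60                      # eV, free-standing monolayers
e_hetero = -32.10                                   # eV, relaxed GaN/WSe2 heterostructure

# Lattice mismatch ratio R_mis = (a2 - a1)/a1, in percent
r_mis = (a_wse2 - a_gan) / a_gan * 100

# Mismatch energy per area, meV/A^2
de_mis = (e_gan_strained + e_wse2_strained - e_gan - e_wse2) / area * 1000

# Cohesive energy per area, meV/A^2 (negative values indicate a stable interface)
e_coh = (e_hetero - e_gan - e_wse2) / area * 1000

print(f"R_mis = {r_mis:.2f} %, dE_mis = {de_mis:.2f} meV/A^2, E_coh = {e_coh:.2f} meV/A^2")
```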
In the case of the GaN/WSe2 heterostructure, the equilibrium interfacial space between GaN and WSe2 is 3.09 Å with DFT-D3 and 3.32 Å with DFT-TS, respectively, which is the canonical distance of the vdW force. 39 Besides, given that R_mis under DFT-D3 is smaller than that with DFT-TS, the subsequent calculations are based on model V of the GaN/WSe2 heterostructure with the DFT-D3 method. The results of model VI of the GaN/WSe2 heterostructure with the DFT-TS method are demonstrated in the ESI. †

Electronic property

The band arrangements of the GaN monolayer, WSe2 monolayer and GaN/WSe2 composites were investigated using the PBE functional. The Fermi level of the three systems was set as the zero of energy, and the high symmetry sites G (0,0,0), M (0,0.5,0) and K (−0.333,0.667,0) in the Brillouin zone were used as observation routes to study the band alignment of the system, with the range limited from −4 eV to 5 eV. Fig. 3(a) shows that monolayer GaN is a semiconductor with an indirect BG of 2.13 eV, where the valence band maximum (VBM) is located at the K point, while the conduction band minimum (CBM) is located at the G point. This is theoretically in accordance with the established research. 40 The VBM and CBM of the single-layer WSe2 are both located near the K point, which denotes that WSe2 is a semiconductor with a direct BG of 1.65 eV. The calculated results of WSe2 are similar to those reported by R. S. Meng, 41 as demonstrated in Fig. 3(b). Fig. 3(c) presents the band diagram of the GaN/WSe2 heterostructure. Both the VBM and CBM are located at the high symmetry point K, and thus it is a direct semiconductor with a BG of 1.47 eV. Compared with an indirect BG semiconductor, a direct bandgap semiconductor has a higher absorption coefficient for photo-generated electron-hole pairs and a higher light utilization rate, and thus it is more suitable for photocatalysis. 42 The forbidden bandwidth of the GaN/WSe2 heterostructure is 1.47 eV, which is slightly lower than that of the GaN and WSe2 monolayers, and its energy levels are denser. This is conducive to the migration of electrons, thereby improving the photocatalytic activity. Also, the BG of the GaN/WSe2 heterostructure is larger than that required for photocatalytic water splitting, which is 1.23 eV. In addition, comparing the three images in Fig. 3(a-c), it can be found that the energy band structure of the GaN/WSe2 heterostructure is similar to the energy band alignments of the two monolayers, and almost retains that of GaN and WSe2 to a large extent. Therefore, it can be speculated that the binding force between the heterostructure layers is a weak van der Waals force, and the interaction force when the two monolayers are combined is small, which also corresponds with the above-mentioned result. The density of states (DOS) usually reflects the distribution of electrons in each system. To further elucidate the electronic structure of the GaN/WSe2 heterostructure, we also calculated the density of states of the GaN/WSe2 heterostructure, GaN monolayer and WSe2 monolayer. The density curve range was −2 eV to 3 eV for analysis, as illustrated in Fig. 3(e and f). The VBM of monolayer GaN consists mainly of Ga-4p and N-2p states, whereas the CBM consists of N-2p states, as displayed in Fig. 3(e). The VBM and CBM of monolayer WSe2 consist primarily of W-5d and Se-4p states, as presented in Fig. 3(f).
The density of states (DOS) reflects the distribution of electrons in each system. To further elucidate the electronic structure of the GaN/WSe2 heterostructure, we also calculated the density of states of the GaN/WSe2 heterostructure, GaN monolayer and WSe2 monolayer. The DOS curves were analyzed in the range from −2 eV to 3 eV, as illustrated in Fig. 3(e and f). The VBM of monolayer GaN consists mainly of Ga-4p and N-2p states, whereas the CBM consists of N-2p states, as displayed in Fig. 3(e). The VBM and CBM of monolayer WSe2 consist primarily of W-5d and Se-4p states, as presented in Fig. 3(f). Apparently, the VBM of the GaN/WSe2 heterostructure is derived from GaN (orange shaded region in Fig. 3(e)), while the CBM of the GaN/WSe2 heterostructure originates from WSe2 (blue shaded region in Fig. 3(f)). The overlap may be caused by orbital hybridization, and the occurrence of orbital overlap results in a reduction of the BG, which is advantageous for improving the catalytic performance of the photocatalytic material. 43

It is well known that the valence band offset (VBO) and the conduction band offset (CBO) are defined at the VBM and the CBM on the two sides of the interface, respectively, and can considerably modify the charge transfer capacity of the heterostructure under illumination. 44 In addition, the VBO, CBO, and BG can be defined as VBO = E_VBM(GaN) − E_VBM(WSe2), CBO = E_CBM(GaN) − E_CBM(WSe2), and E_g = E_CBM(WSe2) − E_VBM(GaN), respectively. At the equilibrium interface, the relation among the VBO, CBO and BG values is VBO (0.12 eV) < CBO (0.65 eV) < BG (1.47 eV). Accordingly, it can be reliably inferred that the GaN/WSe2 heterostructure shows a type-II or Z-scheme band structure, which is consistent with the conclusion drawn from the band alignment, leading to the spatial separation of the photogenerated carrier pairs. 45

The charge transfer at the heterostructure interface can be illuminated by calculating the work functions of the GaN monolayer, WSe2 monolayer and GaN/WSe2 heterostructure. 46 The calculation formula is as follows:

W = E_vac − E_F

where E_vac and E_F denote the vacuum energy level and the Fermi energy level, respectively, and the vacuum energy level is taken as 4.5 eV. The GaN/WSe2 heterostructure curve is the sum of the GaN monolayer and WSe2 monolayer curves. The surface work functions of the GaN monolayer (Fig. 4), WSe2 monolayer and GaN/WSe2 heterostructure are 4.340 eV, 5.125 eV and 4.954 eV, respectively, and the work function of the heterostructure lies between those of isolated GaN and WSe2. Electrons usually flow from the side with the lower work function to the side with the higher work function. Therefore, when the WSe2 and GaN monolayers are in contact, electrons aggregate at the interface on the WSe2 side to form a negative region, and holes accumulate on the GaN side to form a positive region, thereby forming a built-in electric field pointing from the GaN layer to the WSe2 monolayer. Given that the Fermi levels of the two monolayers are different, the energy bands shift accordingly until the Fermi levels of the two monolayers reach equilibrium. The presence of a built-in electric field can improve the mobility of carriers, thereby enhancing the dissociation efficiency of photoinduced electron-hole pairs, reducing the recombination probability of carriers, and improving the photocatalytic properties of the heterostructure.

The position of the band edges of a semiconductor is very important for evaluating its redox ability and photocatalytic performance. It is apparent that upon formation of the heterostructure, the E_F of the two pristine materials reaches equilibrium. Consequently, when the isolated GaN comes into intimate contact with WSe2, E_F shifts negatively by 0.614 V for GaN and positively by 0.171 V for the WSe2 nanoslab. Due to the migration of electrons from monolayer GaN to WSe2, with holes left behind in the GaN layer, the CB and VB edge potentials of GaN in the GaN/WSe2 heterostructure after equilibration are −0.73 V and 1.40 V, respectively, while those of the WSe2 slab are 0.02 V and 1.67 V, respectively.
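A compact numeric sketch of the band-offset and work-function relations above is given below. The band-edge energies passed in are placeholders chosen to reproduce the quoted VBO, CBO and BG, and the (E_vac, E_F) pairs are likewise back-calculated from the quoted work functions; only the printed quantities appear in the text.

```python
# Minimal sketch of the band-offset and work-function bookkeeping described above.
# All input energies are illustrative placeholders (eV); only the printed results
# correspond to numbers quoted in the text.

def band_offsets(vbm_gan, cbm_gan, vbm_wse2, cbm_wse2):
    vbo = vbm_gan - vbm_wse2          # VBO = E_VBM(GaN) - E_VBM(WSe2)
    cbo = cbm_gan - cbm_wse2          # CBO = E_CBM(GaN) - E_CBM(WSe2)
    gap = cbm_wse2 - vbm_gan          # E_g = E_CBM(WSe2) - E_VBM(GaN)
    return vbo, cbo, gap

def work_function(e_vac, e_fermi):
    """W = E_vac - E_F."""
    return e_vac - e_fermi

vbo, cbo, gap = band_offsets(vbm_gan=-0.70, cbm_gan=1.42, vbm_wse2=-0.82, cbm_wse2=0.77)
print(f"VBO = {vbo:.2f} eV, CBO = {cbo:.2f} eV, BG = {gap:.2f} eV")

# Work functions quoted in the text, recovered from assumed (E_vac, E_F) pairs
for name, e_vac, e_f in [("GaN", 4.5, 0.160), ("WSe2", 4.5, -0.625), ("GaN/WSe2", 4.5, -0.454)]:
    print(f"{name}: W = {work_function(e_vac, e_f):.3f} eV")
```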
It can be deduced that the VB edge potential of WSe2 is 0.27 V lower than that of monolayer GaN, while the CB edge of GaN is 0.75 V higher than that of the WSe2 slab. As shown in Fig. 5, the photogenerated carriers follow two different routes: (1) Electron transfer: on the one hand, the photogenerated electrons transfer from the VB to the CB under illumination. On the other hand, the built-in electric field at the interface prevents electrons from migrating from the isolated GaN to WSe2. Meanwhile, the CBO impedes electron injection into the CB of monolayer WSe2, and the photoinduced holes in the VB of monolayer WSe2 are hindered from transferring to the VB of the GaN nanosheet. (2) Recombination of charges: the photogenerated electrons in the CB of isolated WSe2 rapidly recombine with the photogenerated holes in the VB of the isolated GaN, by virtue of the short charge-transfer distance between the WSe2 and GaN layers. Accordingly, photogenerated carriers can be efficiently separated after the construction of the heterostructure. Finally, it was determined that the GaN/WSe2 heterostructure is a representative direct Z-scheme semiconductor, which is consistent with the conclusions from the band alignment and DOS. This type of band arrangement separates electron-hole pairs effectively, diminishes carrier recombination, increases the lifetime of minority carriers, and simultaneously retains the redox ability of the carriers, which is an advantage over the photocatalytic properties of the traditional type-II heterostructure.

The two important potentials for water splitting are the O2/H2O electrode potential of 1.23 V, at which oxygen is generated, and the standard hydrogen electrode (H+/H2) potential of 0 V. Given that the GaN/WSe2 heterostructure is a Z-scheme heterostructure, the potential at the CB position of the heterostructure is −1.06 V and that at the VB position is 1.47 V. Comparing these with the water splitting potentials, the CBM of the GaN/WSe2 heterostructure is more negative than the standard hydrogen electrode (H+/H2) potential, and the VBM is more positive than the O2/H2O electrode potential. The BG of the GaN/WSe2 heterostructure is larger than the potential required for water splitting, and thus it can be deduced that the GaN/WSe2 heterostructure can be applied for either the hydrogen evolution reaction (HER) or the oxygen evolution reaction (OER), making it a good photocatalytic heterostructure.

The formation of a heterostructure induces interactions at the interface, leading to the transfer and redistribution of charges there. This can be studied by calculating the charge density difference of the heterostructure. The plane-averaged charge density difference is defined as Δρ, which can be calculated using the following formula:

Δρ = ρ_GaN/WSe2 − ρ_GaN − ρ_WSe2

where ρ_GaN/WSe2 represents the charge density of the entire GaN/WSe2 heterostructure system, and ρ_GaN and ρ_WSe2 represent the charge densities of the GaN monolayer and WSe2 monolayer, respectively. The plane-averaged charge density difference curve and the three-dimensional charge density difference map along the Z direction of the GaN/WSe2 heterostructure were obtained, as shown in Fig. 5. In the plane-averaged charge density difference curve in Fig. 5, the blue area represents charge depletion and the yellow area represents charge accumulation.
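The charge density difference lends itself to a short numerical sketch. Assuming the three charge densities have been exported from the DFT code as 3D arrays on a common real-space grid, a planar average along z can be taken as below; the array names and the toy data are ours, not output from the actual calculations.

```python
# Minimal sketch, assuming the charge densities are available as 3D numpy arrays
# on a common grid; all names and the toy grids below are ours.
import numpy as np

def plane_averaged_difference(rho_hetero, rho_gan, rho_wse2, dz):
    """Planar-averaged charge density difference along z:
    delta_rho(z) = <rho_GaN/WSe2 - rho_GaN - rho_WSe2>_xy,
    where dz is the grid spacing along z (Angstrom)."""
    delta = rho_hetero - rho_gan - rho_wse2        # 3D difference on the grid
    profile = delta.mean(axis=(0, 1))              # average over the in-plane (x, y) axes
    z = np.arange(profile.size) * dz
    return z, profile

# Toy grids standing in for real charge-density data
shape = (4, 4, 60)
rho_het, rho_g, rho_w = (np.random.rand(*shape) for _ in range(3))
z, dp = plane_averaged_difference(rho_het, rho_g, rho_w, dz=0.25)
print(dp.shape)  # profile of charge accumulation (+) / depletion (-) along z
```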
Electrons accumulate in the WSe2 slab and are depleted in the GaN slab, indicating that electrons migrate from the GaN to the WSe2 nanosheet. To quantify the charge transfer, a Bader charge analysis at equilibrium was carried out. A negative charge of 0.016 e per atom shifts from the GaN nanosheet to WSe2 after the two layers come into contact. The variation in the Bader populations is consistent with the conclusions drawn from the charge density difference. At the same time, Fig. 5 indicates that, in order to stabilize the interface, part of the electron density of GaN relocates to WSe2, which creates a positively charged region on the GaN surface and a negatively charged region on the WSe2 surface. It can be confirmed that the GaN/WSe2 heterostructures can take advantage of the built-in electric field pointing from GaN to WSe2 to efficiently separate the photogenerated electrons and holes (Fig. 6).

Carrier mobility

Generally, the carrier mobility can be estimated from the deformation potential and is a pivotal factor for evaluating photocatalytic properties, and it should therefore be calculated systematically. 47 The methods for the calculation of the carrier mobility are elaborated in the ESI.† By applying compressive and tensile strains, the in-plane elastic modulus C_2D and deformation potential E_1 of the isolated GaN, WSe2 and GaN/WSe2 were assessed by fitting the data to parabolic and linear curves, which are shown in Fig. 7, S5 and S6 in the ESI,† respectively. Also, Table 2 lists the obtained effective masses m*, elastic moduli C_2D, deformation potentials E_1 and carrier mobilities μ_2D. The GaN and WSe2 monolayers and the GaN/WSe2 heterostructure exhibit different C_2D values along the zigzag and armchair directions (C_2D^zig and C_2D^arm), indicating that their mechanical stress responses are anisotropic. In addition, the carrier mobilities of the WSe2 monolayer are lower than those of nanosheet GaN because of its smaller elastic modulus and higher deformation potential. These conclusions are in accordance with previous theoretical and experimental results. [48][49][50][51] In the isolated monolayers, the transfer rates of photogenerated carriers are mainly limited by the low electron mobilities of the GaN slab and the low carrier mobilities of the WSe2 slab, which essentially increases the recombination of photogenerated electron-hole pairs. These limitations of monolayer GaN and WSe2 obstruct their photocatalytic activities, and thus it is significant to fabricate GaN/WSe2 heterostructures. The electron and hole mobilities of the GaN/WSe2 heterostructure are 4149.37 cm2 V−1 s−1 and 2101.71 cm2 V−1 s−1 along the zigzag direction, and 4328.33 cm2 V−1 s−1 and 2395.94 cm2 V−1 s−1 along the armchair direction, respectively, indicating that the electrons in the GaN/WSe2 heterostructure, which tend to spread and move along both the zigzag and armchair directions, are more mobile than the holes. Additionally, water oxidation reactions take place in the WSe2 nanosheet through photogenerated holes, while the water reduction reactions occur in the GaN monolayer, as can be seen in Fig. 5. In comparison to the single layers, the electron mobilities of the GaN/WSe2 heterostructure along the zigzag direction are remarkably enhanced, being 19 times and 21 times those of the isolated GaN and WSe2, respectively, while enhancements of 11 times and 67 times versus monolayer GaN and WSe2 are achieved along the armchair direction.
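The mobility workflow itself is only referenced to the ESI, so the sketch below uses the commonly applied 2D deformation-potential expression as an assumed stand-in; the physical constants are standard, but the numerical inputs are placeholders rather than the paper's fitted m*, C_2D and E_1 values.

```python
# Minimal sketch of a deformation-potential mobility estimate for a 2D material,
# using the commonly cited Bardeen-Shockley-type expression
#   mu_2D = e * hbar^3 * C_2D / (kB * T * m* * m_d * E_1^2),  m_d = sqrt(m*_zig * m*_arm).
# Whether the authors used exactly this form is detailed in their ESI; inputs are placeholders.

E_CHARGE = 1.602176634e-19     # C
HBAR = 1.054571817e-34         # J*s
KB = 1.380649e-23              # J/K
M_E = 9.1093837015e-31         # kg

def mobility_2d(c_2d, e_1_eV, m_eff, m_dens, T=300.0):
    """c_2d: in-plane elastic modulus (J/m^2); e_1_eV: deformation potential (eV);
    m_eff, m_dens: effective masses in units of the electron mass.
    Returns mobility in cm^2 V^-1 s^-1."""
    e_1 = e_1_eV * E_CHARGE
    mu = (E_CHARGE * HBAR**3 * c_2d) / (KB * T * (m_eff * M_E) * (m_dens * M_E) * e_1**2)
    return mu * 1e4  # m^2/(V*s) -> cm^2/(V*s)

# Placeholder inputs (not the paper's fitted values)
print(f"mu ~ {mobility_2d(c_2d=120.0, e_1_eV=2.5, m_eff=0.25, m_dens=0.30):.0f} cm^2/(V*s)")
```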
Correspondingly, the hole mobilities of the GaN/WSe2 heterostructure in the zigzag and armchair directions are improved by 4 times and 19 times, respectively, compared with those of nano-slab WSe2. In short, GaN/WSe2 heterostructures with such notable carrier mobilities exhibit great potential for application in the photocatalytic field.

Photocatalytic performance

To further validate whether the water splitting reactions initiate spontaneously, the thermodynamic feasibility of applying the GaN/WSe2 heterostructures as photocatalysts was explored. The complete water splitting mechanism of the GaN/WSe2 heterostructure under light irradiation was divided into the HER and the OER. Herein, the Gibbs free energies (ΔG) of the water splitting reaction steps were obtained following the report by Nørskov et al. 52 The method for the calculation of ΔG is elaborated in the ESI.† Acidic (pH = 0) and neutral (pH = 7) conditions were both considered. In general, the HER tends to occur at a more favorable potential than the OER, and thus investigating the thermodynamic driving force of the OER is sufficient. For the OER, the structures of the most stable OH*, O*, and OOH* intermediates adsorbed on the WSe2 surface along the standard four-electron reaction path, together with the corresponding variation in ΔG, are shown in Fig. 8. The computed ΔG of OH*, O*, and OOH* in the dark (U_h = 0 V) at pH = 0 is 1.74 eV, 2.81 eV and 4.87 eV, respectively. Therefore, the formation of the OOH* intermediate is the rate-determining step, and the OER process can proceed spontaneously when a 1.47 V external potential afforded by the photogenerated holes is provided. It is apparent that the computed ΔG increases in the first and third steps but decreases in the second and fourth steps when pH = 0 and U_h = 1.23 V. Interestingly, when the external potential reaches 1.67 V at pH = 0, the variation trends of ΔG are the same as under pH = 0 and U_h = 1.23 V. These results indicate that the OER steps of water splitting will not proceed spontaneously at pH = 0. Once the external potential reaches 1.88 V at pH = 7, the ΔG values of the four elementary steps (ΔG_A, ΔG_B, ΔG_C, and ΔG_D) all shift to negative values, as shown in Fig. 8(c) (green line), which confirms that the OER will occur spontaneously at pH = 7 under light illumination. In conclusion, the GaN/WSe2 heterostructures can drive water splitting under irradiation in a neutral environment without an additional thermodynamic driving force.
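Since only the cumulative ΔG values are quoted, the following hedged sketch illustrates the computational-hydrogen-electrode style bookkeeping behind statements like "all steps become downhill at 1.88 V and pH 7". The final cumulative value (4 × 1.23 eV for the ideal overall reaction) and the pH correction term are standard assumptions of that scheme rather than numbers taken from the text.

```python
# Minimal sketch of computational-hydrogen-electrode bookkeeping for the OER steps
# (after the Norskov-type approach referenced above). Cumulative Delta G for OH*, O*,
# OOH* at U = 0 V, pH = 0 are the values quoted in the text; the last cumulative value
# (4 x 1.23 eV) and the pH shift are standard assumptions, not quoted data.

KT_LN10 = 0.0592  # eV per pH unit at ~298 K

def step_free_energies(cumulative, U=0.0, pH=0.0):
    """cumulative: Delta G after each one-electron step at U = 0, pH = 0 (eV).
    Each proton-electron step is shifted by -eU and the pH correction."""
    steps = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]
    return [g - U - KT_LN10 * pH for g in steps]

cumulative = [1.74, 2.81, 4.87, 4 * 1.23]   # OH*, O*, OOH*, O2 (last value assumed ideal)

for U, pH in [(0.0, 0), (1.23, 0), (1.67, 0), (1.88, 7)]:
    steps = step_free_energies(cumulative, U, pH)
    print(f"U = {U:.2f} V, pH = {pH}: steps = {[round(s, 2) for s in steps]}, "
          f"largest step = {max(steps):.2f} eV")
```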
These ndings provide a theoretical basis for the practical application of 2D GaN/WSe 2 heterostructures and provide insight for subsequent research on van der Waals heterostructures. Conflicts of interest The authors declare no competing nancial interest.
6,343.4
2023-06-29T00:00:00.000
[ "Materials Science" ]
Rats do not eat alone in public: Food-deprived rats socialize rather than competing for baits

Limited resources result in competition among social animals. Nevertheless, social animals also have innate preferences for cooperative behavior. In the present study, 12 dyads of food-deprived rats were tested in four successive trials, and then re-tested as eight triads of food-deprived rats that were unfamiliar to each other. We found that the food-deprived dyads or triads of rats did not compete for the food available to them at regular spatially-marked locations that they had previously learnt. Rather, these rats traveled together to collect the baits. One rat, or two rats in some triads, led (ran ahead) to collect most of the baits, but "leaders" differed across trials so that, on average, each rat ultimately collected similar amounts of baits. Regardless of which rat collected the baits, the rats traveled together with no substantial difference among them in terms of their total activity. We suggest that rats, which are a social species that has been found to display reciprocity, have evolved to travel and forage together and to share limited resources. Consequently, they displayed a sort of 'peace economy' that on average resulted in equal access to the baits across trials. For social animals, this type of dynamics is more relaxed, tolerant, and effective in the management of conflicts. Rather than competing for the limited available food, the food-deprived rats socialized and coexisted peacefully.

Introduction

"Let there be bread for all" - Nelson Mandela

Competing for limited resources is a major driving force in the animal kingdom. In the context of social species, there is an apparent conflict between competing over resources on the one hand, and preserving group cohesion on the other hand. Indeed, living in groups has both benefits and costs, and a prerequisite for social species is to establish a balance between cooperation and competition among individuals. Group living usually involves the establishment of various social ranks and, accordingly, the distribution of resources is biased toward the highly-ranked individuals. Even then, however, conflicts among group members arise, with the two main conflicts being over mating partners [1] and food [2]. In the context of the latter, reducing competition over food resources is crucial for survival in social species. For example, reducing resource competition is vital for colonial seabirds in order to ensure self- and chick-provisioning during the breeding season [3]. Rats (Rattus sp.), including laboratory rats, are social animals in which dominance and subordination traits have been described [4][5][6][7]. Nevertheless, rats were shown to display various types of reciprocity [8][9][10][11][12] and even empathy [13][14][15][16]. Indeed, the notion of competing over limited access to food resources as a tool to assess dominance was criticized, and it was suggested that winning access to the limited resources does not necessarily represent dominance or high social rank, but merely reflects a better performance of some individuals over others [17][18] (winners, however, are more likely to win subsequent conflicts [19]).
As noted above, recent studies have revealed that rats with limited access to food may display a type of prosocial behavior rather than provoking competition [8, 13-14, 16, 20], and even help unfamiliar rats (generalized reciprocity [9][10]; for a theoretical treatment of this mechanism see [21]). Those studies demonstrated that social factors may dominate the desire for preferred food in rats. Furthermore, it was argued that rats perform prosocial behavior toward both familiar and unfamiliar rats, and that such performance is repeated consistently and intentionally day after day and at shorter and shorter latencies [15]. It was also suggested that social animals have evolved strong innate preferences for cooperative behavior [22], and that a specific behavior is not a mere product of the proximal immediate cost and benefit, but it also has an ancestral component, balancing the gain in an ultimate evolutionary success in addition to the immediate gain [23]. In other words, prosocial behavior in rats is a reflection of a desire for social contact [24][25] (see however [16]). A past study with rat dyads revealed that the spatial choices of individual rats may affect the future spatial choices of their partners in a foraging task [6]. In another study, in which dyads of food-deprived cage-mate rats were conflicted between competing for limited resources and retaining social contact with their partner, it was demonstrated that the rats unequivocally favored remaining with their partner rather than splitting up to forage independently [26]. Interestingly, while one of the rats ran ahead and ate most of the pieces of the food, the other rat systematically traveled with the leader rather than splitting up from the dyad and foraging independently ("spatial segregation"; [3]). Altogether, past studies have revealed that rats first and foremost favored to travel together even when they were expected to compete over limited resources and, accordingly, it is unlikely that their behavior reflects competition. In the present study we expanded the previous studies [8, 13-14, 16, 20] by taking 12 dyads of cage-mate food-deprived rats and testing them in four successive trials in which one rat in each dyad collected more baits. The same rats were then randomly divided into eight triads, each with three food-deprived rats that were unfamiliar to each other, and which had again to compete for limited equispaced pieces of food. Accordingly, we posed two questions: (i) would the "leaders" in the first dyad trial preserve their "leadership" in subsequent dyad trials and the triad trial; and (ii) would the triad of food-deprived unfamiliar rats split up and compete for the food, or travel and forage together as they had done in a previous study [26]? Answering the first question was expected to reveal whether "leadership" in foraging rats is a personal trait or merely a transient better performance; while answering the second question might uncover whether socializing or competing depends on familiarity with the other individuals. Animals Twenty-four male Sprague-Dawley rats (age 6-7 months; weight 450-600 g) were housed in a temperature-controlled room (22 ± 1˚C) under an inverse 12/12-h light/dark cycle (dark phase 8:00-20:00). Rats were held in standard rodent cages (40 x 25 x 20 cm; two rats per cage) with sawdust bedding and were provided with ad-libitum access to water and standard rodent chow. 
Under these housing conditions rats were exposed to the odors and sounds of all other rats, but had visual and tactile contact only with their cagemate. For each cage, rats were marked with a waterproof marker on their tail, one rat with a single stripe and the other with a double stripe. Before testing they underwent daily handling for two weeks. Ethics note. We confirm that this study was carried out in strict accordance with the recommendations of the Guide for the Care and Use of the Institutional Animal Care and Use Committee (IACUC) of Tel-Aviv University, Permit Number L-14-051. In this permit, Tel-Aviv IACUC approved the specific procedures in this study. No animals were sacrificed for the purpose of this study. Apparatus Rats were tested in a 6 x 5.6 m arena, comprising the white floor of a light-proofed air-conditioned room (22 ± 1˚C). The room door had the same cover of that of the walls, and was located 50 cm above the floor so that there was no distinct visual or tactile landmark on the room perimeter. The room was illuminated with four cool-white LED projectors (65W each), sufficient to distinguish between subjects but subtle enough to prevent discomfort to the rats. Sixteen objects (each a 12 x 12 x 6 cm cement cube) were placed in a grid layout, equispaced at 90 cm from each other in the center of the arena (see Fig 1). Trials were recorded by four equispaced Mintron MTV-73S85H color CCTV cameras, placed 2.5 m above the arena, each providing a top view of one of the arena quarters. The four video images were integrated and tracked as one image by a tracking system (Ethovision XT 10; Noldus Information Technologies, NL) at a rate of five frames per second. Procedure Training and testing were carried out during the dark phase of the rats' dark/light cycle, in order to test the rats during the period when they are most active. Each rat underwent a series of training sessions preceded by 12 hrs of food deprivation with access only to water. Fifteen minutes before each session, rats were brought to a room adjacent to the apparatus and their backs were gently painted in blue, green, or red with a waterproof marker, enabling the tracking system to differentiate among them. Each of the 16 objects was then baited with a small piece of chocolate-flavored cereal, placed in the center of the top surface of each object. An individual rat was then placed gently in the near right corner of the arena, and the experimenter left the room. Dyads and triads of rats were each hand-held by one or two experimenters and gently released simultaneously near the right corner of the arena, with all of them facing the arena center. Training sessions continued until each rat had collected food from at least 14 objects in less than 20 min. Each rat underwent a different number of training sessions depending on its learning rate (mean ± SEM = 3.60 ± 0.15 training sessions). When three rats from different cages had completed the training sessions, they underwent two additional sets of 15-min trials: (i) 'dyad trials', in which cage-mates were tested together four times in the course of two weeks, (two trials/week); (ii) a 'triad trial', in which three unfamiliar rats (from different dyads) were tested together. Before the dyad set of trials and before the triad trial, each rat also underwent a 15 min 'lone trial', which was used as a reference for its behavior in the social trials. 
In this procedure, each rat had learned the location of baits in the test arena before being tested in it with one familiar or two unfamiliar partners. At the end of each trial the rats were returned to their original cages and the arena was mopped with soap and water in order to neutralize odors prior to the next session.

Data acquisition and analysis

Data acquisition was performed automatically for all rats, and the experimenter was blind to the role of the rat in previous trials. For the lone and dyad/triad trials the following parameters were extracted from 'Ethovision', and further analyzed with Microsoft Excel 2010 and STATISTICA 8 (Statsoft, UK):

Cumulative distance: The cumulative metric distance traveled during the 15 min trial.

Task duration: Time elapsed between the first arrival at the first object and the first arrival at the 16th object.

Distance traveled during the task: The cumulative metric distance traveled by a rat during task duration, which was usually shorter than the 15 min trial duration.

Distance traveled along the walls: The cumulative distance traveled within a 40 cm zone along the four arena walls (away from the object zones).

Latency to arrival at the first object: The time elapsed between the beginning of the trial and the first arrival at any one of the objects.

Total visits to the objects (repetitions included): The cumulative number of visits to objects in the course of the entire trial (arrival at the object zone was considered as a visit).

Number of objects visited (repetitions excluded): The cumulative number of visits to different objects in the course of the entire trial (the possible range is from 0 to 16 objects).

Number of collected baits: Scrutiny of the video files revealed that the first rat to arrive at a baited object collected the bait. Only in three cases did a rat that was first to arrive at a baited object not collect the bait. Therefore, for each rat, the number of first arrivals at baited objects was considered as the number of collected baits.

Visit duration: The time (sec) elapsed from arrival at a zone until leaving that zone.

Duration between objects: The average duration between the first visit to one object and the first visit to the next, previously unvisited, object.

Leader and Follower: We used these terms literally to describe which rat ran ahead of the other(s) to collect more baits ("leader"), and which rat was trailing behind the "leaders" in collecting baits. As noted in the 'Discussion', leadership in one domain (collecting baits) rarely predicts leadership in other domains.

(A short sketch of how two of these parameters can be derived from the tracking samples is given after the Statistics paragraph below.)

Statistics

One-way ANOVA was used to compare the behavior of the same rats across trials. Two-way ANOVA was used to compare the behavior of leader and follower rats (between-group effect) in the lone and triad trials (within-group effect). For this, rats in the lone trials which preceded the first dyad trial were classified as "leaders" and "followers" according to their behavior with partners in the first dyad trial.
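The parameters listed above were extracted automatically by EthoVision. Purely as an illustration of what two of them mean, the hedged Python sketch below computes a cumulative distance and a latency-to-zone value from (time, x, y) tracking samples at 5 frames per second; the function names and the toy trajectory are ours, not part of the original analysis pipeline.

```python
# Minimal sketch of how two of the listed parameters could be computed from tracking
# samples (time, x, y) at 5 frames per second. The actual extraction was done in
# EthoVision; the helper names and the toy trajectory below are ours.
import math

def cumulative_distance(track):
    """Total path length (same units as x, y) over a list of (t, x, y) samples."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))

def latency_to_zone(track, zone_center, zone_radius):
    """Time of the first sample falling inside a circular object zone, or None."""
    for t, x, y in track:
        if math.hypot(x - zone_center[0], y - zone_center[1]) <= zone_radius:
            return t
    return None

# Toy trajectory sampled at 5 fps (t in seconds, x/y in cm)
track = [(i * 0.2, 10.0 + 3.0 * i, 15.0) for i in range(50)]
print(cumulative_distance(track))                 # ~147 cm
print(latency_to_zone(track, (100.0, 15.0), 12))  # first arrival time, if reached
```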
A comparison of group and individual performance

Rats in the dyad/triad trials could have divided the task among them, with each consuming a different set of baits, enabling them together to accomplish the task of consuming the 16 baits faster than the lone-trial rats. For example, each of the three rats in a triad could have collected baits from 5-6 objects, and the triad could thereby complete the task of collecting all the baits together faster than lone individuals. However, this was not the case, and the duration of consuming the 16 baits by two or three rats together (regardless of which rat collected them) did not differ from the performance of the same individuals in the preceding lone trials (Fig 2). Indeed, a one-way ANOVA comparison of task duration in the first dyad trial and the triad trial revealed no significant difference (F 3,64 = 0.98; p = 0.4091). Task duration was thus shown not to differ whether one rat, a dyad of rats, or a triad of rats had to collect the 16 baits.

Fig 2. Duration for lone rats refers to the task duration when rats were tested as individuals in the lone trial. Duration for triads refers to the arrival at all 16 objects by any of the rats, implying that each rat could hypothetically visit only some of the objects, as also applicable to the dyad data. As shown, there was no significant difference in task duration between lone rats and triads/dyads.

Implicit in these results is that rats in dyads and triads did not split up to collect the baits independently, but traveled together, as illustrated in Videoclip 1, in which the rats are observed to be more preoccupied with socializing than with collecting the baits.

Leading and following as episodic states that change over trials

Table 1 presents the number of baits collected by each individual rat across the four dyad trials and the subsequent triad trial. As shown, there were two rats that collected more than half of the baits in all trials (top two rows in Table 1). Another three rats consistently collected just a few baits (bottom three rows). The other 19 rats displayed substantial changes from one trial to the next in terms of collecting baits. Thus, leading or following in bait collection were only episodic transient phases for 19 out of the 24 rats, for which either leading or following was preserved within a specific trial and changed in subsequent trials. Further, each rat was categorized as a "leader" or a "follower" according to the number of baits it had collected during the first dyad trial, with leaders collecting more than 8 baits and followers less than 8 baits. Retaining this assignment into "leaders" and "followers", we then calculated how many baits were collected by each of the 'first-trial leaders' and 'first-trial followers' in the subsequent trials.

Table 1. The number of baits consumed by each of the 24 rats (rows) during the four dyad trials and the triad trial. As shown, two rats (top two rows) were continuously leading in terms of the number of baits they collected. Another three rats (bottom three rows) were followers, always collecting a few baits. The other 19 rats greatly varied in the number of baits they collected across trials.

The means (± SEM) for these data, depicted in Fig 3, show that the difference between these groups, which was apparent in the first trial, diminished over successive trials and leveled off from the third dyad trial onwards. Indeed, a two-way ANOVA with repeated measures revealed a significant difference between leaders and followers (F 1,88 = 6.57; p = 0.0276), a significant difference between trials (F 4,88 = 2.79; p = 0.0312), and a significant interaction of trial X leader/follower states (F 4,88 = 5.58; p = 0.0005). A Tukey HSD post-hoc test revealed that the number of baits collected by leaders in the first and second trials significantly differed from the number of baits collected by followers in these trials, as well as from the number of baits collected by either leaders or followers in the triad trial (Fig 3).
Notably, as shown in Table 1, leading and following states were exchanged between most of the rats, so that a running-ahead rat in one trial typically became a follower in the next trial, and vice versa. The episodic states of forerunning and following were thus preserved within a trial but exchanged and leveled off across trials. Furthermore, we compared the changes between each two successive trials with 3,000 randomly-generated changes within the same range. A two-way ANOVA with repeated measures revealed a significant difference between the randomly-generated changes and the actual changes shown in Table 1 and Fig 3 (F 1,9066 = 7.24; p = 0.0072), a significant difference between the changes between successive trials (F 3,9066 = 29.08; p < 0.0001), and a significant interaction (F 3,9066 = 3.88; p = 0.0088). Nevertheless, a HSD post-hoc test for unequal n revealed that the only change that significantly deviated from the random data was between the fourth dyad trial and the triad trial, since the average number of baits collected by a rat dropped because there were now three rather than two rats sharing the 16 baits. Nevertheless, the lack of a significant difference between randomness and the changes among trials 1-4 attests to the inconsistency of the leader and follower states over repeated trials.

Fig 3. (left hand columns) These first-trial categories as leaders and followers were retained for the subsequent trials, regardless of the actual number of baits collected by the rats in these subsequent trials. The mean of the actual performance in subsequent trials is thus depicted according to the original categories, illustrating a diminishing difference between leaders and followers, reaching equity from the third trial on. * indicates a significant difference between the leaders and followers in that trial, as well as between the leaders in this trial and both leaders and followers in the triad trial. Note that this does not mean that there were no leaders and followers from Trial #3 on. Conversely, there were always leaders and followers but their identity changed.

The question arose as to whether it was possible to identify "leaders" and "followers" already in the preceding lone trial, when they were tested individually before being exposed to the baits together with partners. For this, data of the rats during the triad trial were divided into leaders and followers in accordance with their performance in the triad trial, and compared for the lone and triad trials (Table 2). A repeated-measures two-way ANOVA was performed and followed by a Tukey post-hoc test. As shown, there was no significant interaction. Notably, there were a few within-trial significant differences (comparing leaders and followers). These parameters are depicted in the shaded rows of Table 2. Specifically, there were significant differences between leaders and followers in four parameters: leaders traveled a greater distance during the task of visiting the 16 objects, had a shorter latency to visit the first object, visited more different objects (repetitions excluded), and overall paid more visits to all objects (repetitions included). None of these differences, however, were reflected in the individual rats in the lone trial, implying that the "leadership" of these rats was manifested only in the presence of conspecifics.
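The comparison with 3,000 randomly generated changes described above was carried out with a two-way repeated-measures ANOVA; the sketch below only illustrates, under our own simplifying assumptions, how such random trial-to-trial changes could be generated and summarized against observed changes. The bait counts used here are made up.

```python
# Minimal sketch of comparing observed trial-to-trial changes in collected baits with
# randomly generated changes, in the spirit of the 3,000-sample comparison described
# above. The paper used a two-way repeated-measures ANOVA; this only illustrates the
# random-change generation and a simple summary, with made-up bait counts.
import random

random.seed(0)

# Made-up bait counts per rat over five trials (4 dyad trials + 1 triad trial)
baits = [
    [12, 4, 9, 7, 5],
    [4, 12, 7, 9, 6],
    [10, 6, 5, 11, 4],
    [6, 10, 11, 5, 7],
]

def observed_changes(counts):
    """Absolute change in baits collected between successive trials, per rat."""
    return [abs(b - a) for row in counts for a, b in zip(row, row[1:])]

def random_changes(n, low=0, high=16):
    """Random changes drawn within the same possible range of bait counts."""
    return [abs(random.randint(low, high) - random.randint(low, high)) for _ in range(n)]

obs = observed_changes(baits)
rand = random_changes(3000)
print(f"mean observed change: {sum(obs) / len(obs):.2f}")
print(f"mean random change:   {sum(rand) / len(rand):.2f}")
```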
Similarly, the behavior of leaders also demonstrated significant differences between the triad and lone trials in four out of the eight parameters in Table 2 ('Cumulative distance', 'Distance traveled during the task', 'Visit duration' and 'Distance traveled along the perimeter'), whereas the behavior of followers differed in only one parameter ('Cumulative distance'). Overall, there was no significant interaction in the eight parameters depicted in Table 2, indicating that the trend of change in each parameter was similar in leaders and followers, but that these changes were more salient in leaders compared with followers.

Dyad trials

Followers trailed the leaders; in 53% of the arrivals of a leader at a baited object, a follower rat arrived at the same object within 15 seconds, and in another 21% of arrivals the follower rat arrived at the same object in less than one minute. In 43% of arrivals of the leader at a baited object, the third rat also arrived at the same object (see for example Videoclip 1 and Fig 4). Notably, there was no difference in the total activity of leaders and followers (top row in Table 2). In other words, all the rats, leaders and followers, were similarly active (and as shown in Videoclip 1 and Fig 4 also traveled together), with the leaders collecting more baits and paying overall more visits to the various objects (whether baited or not). The tendency of the rats to arrive together with partner(s) at the same objects is illustrated for two triads in Fig 4. As shown, the leadership of one or two rats in the triad is conspicuous.

Table 2. Mean (± SEM) data on eight activity parameters for rats in the lone and the subsequent triad trial. In each trial, rats were classified as leaders or followers according to their performance in the triad trial. For each parameter, the results of a two-way ANOVA are depicted at the right for within-trial comparison (between leaders and followers), for between-trial comparison (between lone and triad trials), and for the interaction of trial x leading. Significance is depicted in boldface. The results of a post-hoc Tukey HSD comparison are depicted in superscript, as specified at the bottom of the table.

Rats tended to travel with partner(s) to the objects

In most trips to the objects, leader and follower rats tended to travel with one or two partners (75% and 76% of all trips, respectively). Moreover, in each of the shared trips, leaders and followers visited together most of the objects visited in that trip (74% and 73%, respectively). Altogether, the rats took most of the trips to objects together, mostly visiting the same objects in each of these shared trips. Similarly, the baits were usually collected during shared trips (see Videoclip 1). Only 26% and 37% of the baits were collected, respectively, by leading and following rats during trips without partners, when only one of the rats was in the object zone and the other rat (or the other two rats in the triad trial) remained in the perimeter zone. These data demonstrate the tendency of the rats to travel together and visit the same objects in the same trips, regardless of being a leader or a follower. In other words, the rats that collected more baits during the trial ("leaders") did not acquire their status by traveling alone, but mainly by traveling ahead of their partners in shared trips.
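The percentages of follower arrivals within 15 s or 1 min of a leader, and of shared trips, were scored from the video and tracking records. The sketch below shows, with made-up arrival times and our own helper functions, how such within-window percentages can be computed from per-object arrival times.

```python
# Minimal sketch of how the "follower arrives within 15 s of the leader" percentages
# could be derived from per-object arrival times. The arrival times below are made up;
# the real values were scored from the EthoVision tracking data and video files.

def follow_latencies(leader_times, follower_times):
    """For each object, time difference between follower and leader arrival (s)."""
    return {obj: follower_times[obj] - t
            for obj, t in leader_times.items() if obj in follower_times}

def fraction_within(latencies, window_s):
    inside = sum(1 for dt in latencies.values() if 0 <= dt <= window_s)
    return inside / len(latencies) if latencies else 0.0

# Made-up arrival times (s) of a leader and a follower at baited objects 1..6
leader =   {1: 30, 2: 55, 3: 90, 4: 140, 5: 200, 6: 260}
follower = {1: 38, 2: 70, 3: 93, 4: 200, 5: 212, 6: 330}

lat = follow_latencies(leader, follower)
print(f"within 15 s: {100 * fraction_within(lat, 15):.0f}%")
print(f"within 60 s: {100 * fraction_within(lat, 60):.0f}%")
```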
Fig 4. The tendency of the rats to travel together with one or two partners to the same objects is illustrated for a triad with one leader (top) and a triad with two leaders (bottom). The 16 objects are ranked on the abscissa according to the order in which each was visited, and the time of visiting each of the objects by each of the rats is given along the ordinate. Accordingly, for each object the order of symbols from bottom to top (time) reflects the order in which the rats arrived at that object. In the triad with one leader, the red rat arrived first at objects 1-3, and then again at objects 7-15. The green rat arrived first at objects 4, 5 and 6, and the blue rat arrived first only at object 16. Notably, the red and green rats arrived at most objects almost at the same time, while the blue rat was out of the race most of the time. In the triad with two leaders (bottom), the green rat arrived first at objects 1-7, and then the blue rat took over and arrived first at objects 8-16, while the red rat always lagged behind. As shown, the three rats traveled together to the first eight objects, arriving at them almost at the same time; then the green and the blue rats continued to travel together to objects 9-12 while the red rat split up from the triad; and, ultimately, the blue and the green rats also split up, with the blue rat arriving a few minutes before the green rat at objects 13-16. The data for triads with two leaders indicate alternating "leadership" with none having consistent precedence as in triads with one leader. Altogether, the overlap and adjacency of the symbols of the different rats (arrival time) illustrate their arrival at the same object at the same time. doi:10.1371/journal.pone.0173302.g004

In triads in which two of the rats were first to arrive at about the same number of objects, these leading rats shared, on average, about 79% ± 5% of the objects in each trip. That is, they mostly traveled together, temporally exchanging leadership between them, with each being first at some point and eventually collecting the same number of baits (Table 1 and Fig 4). Leading and following were thus episodic states in which rats traveled mostly together, traversed about the same overall distance, and underwent similar changes to their behavior compared with their behavior in the lone trial. Nevertheless, these changes were more salient in leaders, conferring upon them their episodic "leader" status in being first to access the baited objects during a specific trial. Altogether, rats with partners interacted with each other both before and in-between approaching the objects, whereas the same rats in the lone trials approached the objects immediately or after progressing along the arena wall (see Videoclip 1).

Discussion

In the present study, 24 food-deprived rats were first trained individually to collect baits placed on each of 16 equispaced objects. Having learned to collect the baits, they were then tested with the 16 baited objects first as cage-mate dyads over four trials, and afterwards as triads of three rats that were unfamiliar to one another. We found that when tested in dyads or triads, the rats did not split up to collect the baits independently, but mostly traveled together to the various objects, with either one, or two of them in some triads, leading and arriving first at the majority of objects and collecting the baits. Nevertheless, regardless of which arrived first, the rats mostly traveled together (Videoclip 1) with no substantial difference among them in terms of their total activity. It would seem that rats in dyads and triads focus more on socializing, tending to travel to the objects with partner(s).
In consequence, the time taken to collect all 16 baits was approximately the same for lone rats, dyads, or triads of rats. In terms of collecting baits, leading and following states in individual rats were exchanged over repeated trials. In other words, leading and following in collecting the baits were transient states that were usually preserved within a trial (a specific session of testing a dyad) but changed across trials (subsequent testing of the same dyad). In the following discussion we interpret the puzzling preference of the food-deprived rats to travel together rather than splitting up and collecting the baits independently. We suggest that the change in "leadership" over trials, as observed in the present study, may reflect a sort of 'peace economy' in which all individuals equally benefit from the available resources over trials. The present results offer a follow-up to our previous studies, which showed that rats prefer to travel together [24] and that food-deprived rats favor socializing over competing for food [26]. As in the previous studies, here too it seems puzzling as to why the food-deprived rats in triads foraged together rather than competing for the food available to them at the regular spatially-marked locations that they had previously learned. It could be argued that the rats were not hungry, but this is unlikely since for such a relatively small mammal, 12-hr of food deprivation is not trivial. Another possibility is that the rats traveled together while competing for the baits, with some leading in one trial and others leading in subsequent trials, demonstrating a sort of "episodic personality" [27]. This, however, is less likely since the rats continued to travel together to the objects even after they had consumed all the baits and there was no apparent benefit for the rats except that of social traveling. This indicates that there is a strong social affiliation among rats, and in the present experiment it outweighed hunger, which is a basic and strong drive in animal behavior, as well as the past knowledge of the location of palatable food. Altogether, foraging and collecting the baits was definitely not the rats' only motivation in traveling through the arena. Since, over trials, all rats obtained a similar amount of baits, and since this was achieved without apparent competition, we termed it a "peace economy". The present results do not provide substantial support for leadership and followership in rats. A social hierarchy with dominant and subordinate individuals characterizes rats, including laboratory rats [5,28]. It was presumed that competition over limited access to food presents a measure of competitive dominance [29]. This notion was examined in further studies, revealing that in the case of limited resources, foraging performance does not necessarily reflect dominance/subordination ("the fallacy of limited access to food and dominance" [17][18]). Even the use of the terms 'leader' and 'follower' was criticized and replaced with 'high-performing rats' and 'low-performing rats' [30][31]. A recent survey on leadership in mammals has highlighted several dimensions of leadership [32], characterizing a spectrum of various types and intensities of leadership. Applying these criteria to the present study, it would seem that in the context of a specific trial, some rats may be considered as achievement-based moderate leaders that coordinate the behavior of the others (followers), with the payoff (baits) skewed to these "leaders".
Leadership in one domain (collecting baits), however, rarely predicts leadership in other domains (at least in the other behavioral parameters that were measured in the lone and triad trials). Indeed, of the various activity parameters that were measured in the present study, none could predict which rat would be a leader or a follower. Therefore, even if collecting more baits in a specific trial could be considered as representing a specific type of leadership or an aspect of dominance, this better performance was only a transient state, with the rats exchanging this limited type of leadership and followership between or among them over trials. When a rat dyad or triad was traveling, one or two of the rats followed and collected only a few baits, and sometimes not even one, yet the rats kept traveling together. A possible explanation for this could be that of the model of spontaneous emergence of leadership in foraging pairs [33]. According to this model, the rat with the lowest reserves determines when the group should forage, while the other rats that follow are likely to benefit from the safety of a joint activity, along with the possibility that they too may forage. Accordingly, group coordination emerges spontaneously by means of temporary 'leaders' and 'followers', due to individual differences in the energetic states, with a simple rule of thumb: "I forage if either my reserves have fallen below a certain threshold value, or my partner chooses to forage" [33]. Moreover, this tactic for collective travel requires only the ability to observe and react to a change in the partner's behavior, a type of self-organized behavior [34] that emerges spontaneously. Restricting the term "leadership" to literally moving ahead thus legitimates the simple rule of "follow the individual that moves first", and automatically produces leaders and followers [35][36]. This model may also explain why leadership and followership states were stable within a specific trial, but changed across trials, so that typically, a leader in one trial could become a follower in the subsequent trial, and vice versa. In the context of the latter model, it could be argued that the present results are due to scramble competition, in which one individual is faster in depleting a limited resource, but another individual then takes over since, for example, the first individual is now engaged in digestion (see appendix in [37]). This is not likely, however, since the rats could recover from hunger after each trial with ad-libitum access to food, and were again food-deprived only a few days later, before the subsequent trial. Rats tend toward social affiliation, as revealed in early reports [4,7], and in the present task they were also motivated to obtain food. It could be argued that when a rat traveled towards a baited location, it was sometimes followed by another rat that was motivated by the social affiliation tendency. Nevertheless, the changing states between or among the rats across trials may, in most rats, reflect prosocial behavior or a type of reciprocity, as recently revealed in both wild and laboratory rats [8-12, 20, 38]. Specifically, it was suggested previously that interactions in which followers voluntarily follow the leaders reflect leader-follower relations that are more reciprocal and mutually beneficial [35].
Moreover, it was also argued that the change in leading-following states over trials is in line with the mutually beneficial exchange between a cooperator and a reciprocating partner [39], which is a form of direct reciprocity, a basic form of reciprocal cooperation [35]. Indeed, recent laboratory studies demonstrated that wild rats display a generalized reciprocity that does not depend on the identity of the recipient [8], a direct reciprocity in which the partners are familiar [9,11,14], and that they help the hungrier recipients or those in poor condition [10]. Other studies have demonstrated reciprocity in laboratory rats, suggesting that the mechanism for this reciprocity is a display of food-seeking behavior [40], and that prosocial behavior in rats occurs even in the absence of strategic, reciprocal, or selfish motivations [38]. It was also suggested that the origin of this behavior is more likely to be genetic than cultural [41]. Notably, the present results show a lack of competition, which could be a result of a sort of reciprocity, although the latter is not explicit in the present findings. Nevertheless, we suggest that the past studies on reciprocity [8-12, 20, 38], together with the present results, may be viewed as a sort of "peace economy" by the rats, aimed at increasing the welfare of the individuals. The present results, however, demonstrate only that rats preferred to forage together and that they displayed balanced food consumption over trials. Notably, for small mammals such as rats, the primary pay-off in within-group cooperation may be the anti-predator benefit offered by 'safety in numbers' [42]. Rats may therefore cooperate even when they only benefit sometimes, since by working together they are less likely to fall victim to predation. Accordingly, laboratory rats, which notably display reduced aggressiveness compared to their wild conspecifics, did not compete for the food in the present study despite being food-deprived. Rather, they displayed a sort of 'peace economy' that on average resulted in changing "leadership" among or between them and equal access to the baits across trials (Table 1 and Fig 3). Indeed, it was suggested that this type of social dynamics is more relaxed, tolerant, and effective in the management of conflicts. It is achieved through a process in which individuals continually modify social relationships in order to attain a peaceful coexistence [43].
8,411.2
2017-03-09T00:00:00.000
[ "Biology", "Psychology" ]
Effect of culture medium, host strain and oxygen transfer on recombinant Fab antibody fragment yield and leakage to medium in shaken E. coli cultures Background Fab antibody fragments in E. coli are usually directed to the oxidizing periplasmic space for correct folding. From periplasm Fab fragments may further leak into extracellular medium. Information on the cultivation parameters affecting this leakage is scarce, and the unpredictable nature of Fab leakage is problematic regarding consistent product recovery. To elucidate the effects of cultivation conditions, we investigated Fab expression and accumulation into either periplasm or medium in E. coli K-12 and E. coli BL21 when grown in different types of media and under different aeration conditions. Results Small-scale Fab expression demonstrated significant differences in yield and ratio of periplasmic to extracellular Fab between different culture media and host strains. Expression in a medium with fed-batch-like glucose feeding provided highest total and extracellular yields in both strains. Unexpectedly, cultivation in baffled shake flasks at 150 rpm shaking speed resulted in higher yield and accumulation of Fabs into culture medium as compared to cultivation at 250 rpm. In the fed-batch medium, extracellular fraction in E. coli K-12 increased from 2-17% of total Fab at 250 rpm up to 75% at 150 rpm. This was partly due to increased lysis, but also leakage from intact cells increased at the lower shaking speed. Total Fab yield in E. coli BL21 in glycerol-based autoinduction medium was 5 to 9-fold higher at the lower shaking speed, and the extracellular fraction increased from ≤ 10% to 20-90%. The effect of aeration on Fab localization was reproduced in multiwell plate by variation of culture volume. Conclusions Yield and leakage of Fab fragments are dependent on expression strain, culture medium, aeration rate, and the combination of these parameters. Maximum productivity in fed-batch-like conditions and in autoinduction medium is achieved under sufficiently oxygen-limited conditions, and lower aeration also promotes increased Fab accumulation into extracellular medium. These findings have practical implications for screening applications and small-scale Fab production, and highlight the importance of maintaining consistent aeration conditions during scale-up to avoid changes in product yield and localization. On the other hand, the dependency of Fab leakage on cultivation conditions provides a practical way to manipulate Fab localization. Background Fragments of immunoglobulin molecules are widely utilized in therapeutic and diagnostic applications as well as in basic research. Unlike full-length antibodies, these smaller fragments, such as the antigen binding fragments (Fab) and single-chain variable fragments (scFv), are small enough to be produced in Escherichia coli. However, the yields of correctly folded, functional antibody fragments in E. coli are often relatively low and dependent on the type and primary sequence of the fragment. Yields in the range of 10-20 mg functional Fab fragments per liter of culture are generally considered good in shake flask scale [1][2][3]. Major challenges in bacterial antibody fragment expression are the assembly of separately expressed light and heavy chain to constitute the functional heterodimer and formation of the four intra-chain and one inter-chain disulfide bond [4]. Since the disulfides cannot be efficiently formed in the reducing cytoplasm of E. 
coli, antibody fragments are most commonly supplemented with a signal sequence that directs them to the more oxidizing bacterial periplasm for correct folding. Folded fragments may further leak from the periplasm into the culture medium, from which purification can be accomplished without cell lysis [4]. An alternative strategy is to use redox mutant strains with more oxidizing cytoplasm for folding of the fragments in the E. coli cytoplasm [3,[5][6][7], but these mutant strains tend to have poor growth that limits their capacity for protein production and scale-up to fermenter scale. Previously described approaches to improve antibody fragment yields in E. coli have mostly focused on the optimization of the expression construct and the target fragment itself. For example, co-expression of different accessory proteins such as the cytoplasmic DnaKJE chaperone [8] or periplasmic dithiol-disulfide oxidoreductases and prolyl cis-trans isomerases [9] have been reported to increase yields of Fab and scFv fragments. Fusion to maltose-binding protein (MBP) has been shown to not only increase solubility of antibody fragments [10,11], but also enhance secretion from periplasm into the culture medium in secretory E. coli strains [10]. MBP fusion [12] as well as thioredoxin [13] and SUMO fusions [14] have also been reported to improve scFv yields in the cytoplasm of redox mutant strains. In some cases yield may also be increased by engineering the amino acid sequence in non-binding regions of the fragment to reduce its aggregation tendency [15]. A few reports exist on the optimization of culture medium and strain selection for antibody fragment production. Nadkarni et al. [1] compared defined media with different carbon sources and induction strategies, and found Studier's lactose autoinduction medium to provide higher Fab yields than either glycerol-based defined medium with lactose induction or glucose-based defined medium with IPTG induction. The authors also compared two expression strains, BL21(DE3) and BL21 (DE3)-RIL, although these strains differ from each other only regarding rare codon utilization but not regarding carbon metabolism. The effect of inducer on Fab expression has also been studied in E. coli K-12 RB791, in which highest Fab yields were obtained by induction with either a very low IPTG concentration or 2 g l -1 lactose using glycerol as the main carbon source [16]. Supplementation of culture medium with L-arginine and reduced glutathione [17] or sucrose [18] has been described as means to increase yields of functional scFvs. Glutathione was suggested to improve reshuffling of incorrectly formed disulfides, while the effect of sucrose was hypothesized to be due to osmotic enlargement of the periplasmic space and consequently enhanced folding of the product as a result of reduced local concentration. Cultivation temperature has been reported to influence the secretion into the culture medium so that at lower temperatures the product is more efficiently retained in the periplasm [18]. In this study we aim to investigate the effects of host strain, culture medium and aeration conditions on the production and extracellular leakage of Fab fragments in shaken E. coli cultures by the example of Fabs binding specifically to N-terminal pro-brain natriuretic peptide (NTproBNP), an important diagnostic marker of heart failure that can be detected from serum by an immunoassay applying the anti-NTproBNP Fabs [19]. 
Three different culture media were compared, all of them containing complex nutrients, but differing in their primary carbon source as well as in induction strategy. In the Super Broth medium, peptides, amino acids and sugars of yeast extract constitute the main carbon source during IPTG-induced expression. In Studier's autoinduction medium [20], growth is first supported by glucose, and when glucose is exhausted protein expression is autoinduced by diauxic shift to lactose utilization, while glycerol is also coutilized as a major carbon source during expression. The third medium was the fed-batch-like EnBase® medium with IPTG induction. In this medium the primary carbon source, glucose, is gradually provided from a soluble polysaccharide by biocatalytic degradation [21,22]. The polysaccharide used in the current study is different from the previous reports in that it is also slowly utilized to some degree through the E. coli maltose-maltodextrin transport system (own unpublished results). The EnBase fed-batch -like medium has been successfully used for high-yield cytoplasmic expression of several non-disulfide bond containing proteins [22][23][24][25][26][27] as well as functional protein with multiple disulfide bonds [28,29], while in this study we apply this medium for the first time for periplasmic production of disulfide-containing proteins. We also compared two metabolically different E. coli strains regarding their Fab yield in the different growth media. Apart from differences in Fab yields, we also observed some peculiar effects on leakage of the Fabs into the culture medium depending on the type of medium, host strain, and aeration efficiency. Comparison of culture media in small scale Fab fragment expression in E. coli RV308 and E. coli BL21 was compared in three different media in 24 deep well plate (24dwp) cultures. Notable differences were observed in both the total yield and localization of the Fabs (Figure 1). The fed-batch medium provided highest total yields in both strains, and 60-75% of active product was found in the extracellular medium at 24 h after induction ( Figure 1 and Table 1). In the autoinduction medium, all four fragments were produced at high concentrations in E. coli BL21(DE3), but for three of the fragments the proportion of extracellular product (40%) was lower than in the fed-batch medium. Low levels of Fab activity were detected also in E. coli RV308 when cultivated in the autoinduction medium, even if this strain is a Δ(lac)X74 mutant and the expression must therefore be accounted to leakiness of the promoter. Fabs were most efficiently transported to extracellular medium when expressed in the Super Broth medium, in which 72-97% of product activity was measured in the extracellular fraction irrespective of fragment or host strain. However, the total Fab yields in Super Broth were much lower than in the other two media. Thus the small scale results suggest that the fedbatch medium is the most favorable medium for Fab production due to the high overall yield and efficient transport of the product to extracellular medium, as well as the robustness regarding strain type. The main reason for higher product concentration in the fed-batch medium compared to the autoinduction medium appears to be higher cell density (cell density data for one representative Fab are shown in Table 1) rather than notably higher productivity per biomass. 
A reliable calculation of product per biomass was however not possible on the basis of OD 600 , since visual observation of DNA aggregates in the medium at 42 h indicated some degree of cell lysis especially in E. coli BL21(DE3) cultures. Lysis was apparently one of the reasons for Fab release from periplasm to medium in E. coli BL21(DE3), and possibly also in E. coli RV308. Cultivation in Super Broth resulted in high final pH ranging from 8.0 to 8.5 (data for one representative Fab are shown in Table 1), while in the fed-batch and autoinduction media pH remained at a lower and more neutral range (6.6-7.2 depending on the clone and medium, except for the clones expressing Fab 1B10 which resulted in pH decrease to levels below 6.0; data not shown). The pH increase in Super Broth is in line with our earlier observations on pH development in complex media without added monosaccharide carbon sources [22,23], and likely limited both the final cell density and Fab yield. Medium composition, respiratory activity and Fab localization A separate small-scale cultivation was performed to study the influence of fed-batch medium composition on the dynamics of dissolved oxygen tension (DOT) during Fab fragment expression in E. coli RV308 ( Figure 2). The pre-induction medium composition was kept constant, and modification was achieved by addition of more nutrients at the time of induction. Switch from initially unlimited growth to fed-batch-like limited growth took place at 9-10 h, and DOT at the time of induction (18 h) was 80-100% in all cultures. The cultures that did not receive complex nutrient supplementation and more glucose-releasing biocatalyst at induction maintained DOT at 100% after induction (Figure 2a). Addition of complex nutrients and more biocatalyst at induction (18 h) resulted in increased respiratory activity, and consequently DOT remained at a lower level (20-30%) for a period of 8-10 h after induction (Figure 2b). The increased oxygen consumption by addition of complex nutrients and increased glucose release was associated with high Fab activity in the extracellular medium (66-73% of total Fab activity, Figure 2b; see also in Additional file 1: Table S1b), while in the cultures with lower respiration and 100% oxygen saturation the product remained mostly in the periplasm (Figure 2a; see also in Additional file 1: Table S1a). pH was maintained between 7.0 and 7.5 in both cases (data not shown). Though the independent effects of DOT, growth rate and metabolic changes on Fab localization cannot be evaluated separately in this experiment, the results demonstrate that in the fed-batch medium the ratio of periplasmic and extracellular Fab can be drastically changed by modifying the availability of carbon and nitrogen substrates and consequently the respiratory rate after induction. Influence of shaking speed on Fab yield and localization Expression of the Fab fragments in shake flask scale demonstrated that the yield and extracellular leakage can be influenced by modification of aeration efficiency via shaking speed. Cultures in the fed-batch medium were incubated at 250 rpm shaking speed in baffled Ultra Yield Flasks™ (UYF) up until induction, after which the speed was either reduced to 150 rpm (providing k L a~200 h -1 [30]) or kept at 250 rpm (providing k L a~500 h -1 [23]). Expression in E. 
coli RV308 at the lower shaking speed resulted consistently in higher yields of fragments F1, F16 and F32, even if there was some experiment-to-experiment variation in yield between replicates ( Figure 3). Reduction of shaking speed also resulted in significant changes in Fab localization so that most of the Fab activity was detected in the medium as opposed to the efficient periplasmic retention of Fab at 250 rpm ( Figure 3). This effect was observed for F1 and F32 in two out of three replicate experiments (A and C in Figure 3) at 150 rpm, while in the third experiment (B) there was much less leakage into the medium. Despite this inconsistency, which is may be caused by differences in oxygen uptake rate (OUR) between the replicates, the data suggest that there is a tendency towards higher extracellular Fab accumulation under conditions of lower oxygen supply. The extracellular proportion of fragment F16 was lower than for the other fragments, but consistently higher at 150 rpm compared to 250 rpm. Unlike the other three fragments, 1B10 leaked efficiently into the medium already at 250 rpm (data for 1B10 is shown in Additional file 2: Table S2a), and hence no difference in leakage was observed at different shaking speeds. The degree of cell lysis was estimated by total protein measurement from cell pellet and culture supernatant by Bradford assay. Comparison of the percentage of cell lysis (as estimated from the relative concentrations of total protein in the cell pellet and in the medium; see in Additional file 2: Table S2a for the lysis estimates) and the percentage of Fab found in the culture medium suggests that at 250 rpm the small amount of fragments F1, F16 and F32 detected in the medium was released by cell lysis and there was no notable leakage from intact cells. The higher extracellular Fab yield at 150 rpm was partly due to higher cell lysis, but as the percentage of lysis was much lower than the percentage of extracellular Fab it is apparent that there was also increased leakage from intact cells. Depending on the fragment, at least 20-40% of total functional Fab leaked into the medium without accompanying lysis at 150 rpm. The possibility that the reduction of extracellular Fab fraction at the higher shaking speed might result from Fab denaturation due to the very efficient and turbulent shaking was ruled out by demonstrating over 95% preservation of binding activity when Fab-containing cell-free broth was shaken at 250 rpm for 24 h (data not shown). Similar effect of shaking speed on yield and localization was observed for E. coli BL21(DE3) in the autoinduction medium, when cultures were performed in the UYF bottles with either 150 or 250 rpm shaking speed from the beginning. Total yields of F1, F16 and F32 were much higher at 150 rpm, and leakage of Fab into the medium also increased significantly at the lower shaking speed ( Figure 4). The degree of lysis was low at both shaking speeds, but percentage of extracellular Fab increased from ≤ 10% to 20-30% of total Fab activity when the speed was reduced from 250 to 150 rpm (see in Additional file 2: Table S2b for the lysis estimates and percentages of extracellular Fab). Total yield of 1B10 in the autoinduction medium was not affected by the shaking speed ( Figure 4), but extracellular Fab activity increased from 3 to 88% when speed was reduced to 150 rpm. E. 
coli BL21(DE3) cultures in the fed-batch medium released Fabs very efficiently into the medium so that irrespective of shaking speed 87-97% of total Fab activity was detected in the medium after 24 h expression period ( Figure 4; see also in Additional file 2: Table S2c for percentages). Cell lysis was also substantial, typically 40-50% (lysis estimates are shown in Additional file 2: Table S2c). Total Fab yields were higher at the lower shaking speed, but the effect was less prominent than in the autoinduction medium. Based on measurements at a few selected time points, pH was not significantly affected by the shaking speed in E. coli RV308 cultures (pH data are shown in Additional file 3: Tables S3a-S3c), and the differences in Fab yield and leakage are therefore not likely to be due to pH changes. In E. coli BL21(DE3), pH in the fed-batch medium was lower at the lower shaking speed, while in the autoinduction medium lower shaking speed contributed to consistently~0.4 units higher pH. The pH change in fed-batch medium had apparently no influence on the extracellular Fab ratio in E. coli BL21(DE3). Influence of culture volume on Fab yield and localization The finding that a change in shaking speed could so drastically influence Fab localization was unexpected, and we wanted to see whether this effect could be reproduced by modification of aeration efficiency via the culture surface to volume ratio. This was studied by varying the culture volume between 1 and 5 ml in the wells of a 24dwp. The results with Fab F1 expressed in E. coli RV308 in the fed-batch medium demonstrated significantly increased leakage into the extracellular medium with increasing culture volume (Figure 5a). The threshold was between 3 ml and 4 ml so that at 3 ml 92% of total Fab activity was retained in the periplasm, while at 4 ml 66% of Fab activity was found in the culture medium (the percentages of extracellular Fab are shown in Additional file 4: Table S4). When culture volume was further increased to 5 ml the total yield was reduced and anaerobic metabolism was indicated by a low pH (Figure 5b). These results with E. coli RV308 were reproduced in an independent repetition of the experiment. The data demonstrate that Fab localization may be drastically changed by a relatively small change in aeration efficiency, such as increase of culture volume by one third. In E. coli BL21(DE3) cultures in the autoinduction medium the influence of culture volume on total Fab yield was minor (Figure 5a). It is likely that even at the lowest volume (1 ml) oxygen supply was below the threshold that caused significant productivity loss in the shake flask cultures at high shaking speed. Increasing culture volume contributed to gradual increase in the extracellular Fab fraction from 8% in 1 ml culture up to 28% in 5 ml culture (Figure 5a; see also in Additional file 4: Table S4 for the percentages of extracellular Fab). As E. coli cannot grow anaerobically on glycerol, no acidification was observed in the autoinduction medium with increasing severity of oxygen limitation (Figure 5b). Timeline of Fab leakage To get a more detailed insight into the Fab release from periplasm to medium and the role of cell lysis in this, Fab accumulation and OD 600 profiles were recorded from 150 rpm shake flask cultures in the fed-batch medium with both expression strains. Fragment F1 was expressed as the representative fragment. Fab accumulation into the medium started at approximately 9 h and 5 h after induction in E. coli RV308 and E. 
coli BL21(DE3), respectively (Figure 6). At the same time, Fab activity in the periplasm and OD 600 were both still increasing, which indicates that the culture was not yet in stationary phase and not susceptible to cell lysis. At 14 h after induction, the proportion of extracellular Fab of total Fab activity at the time was 30% in RV308 and 50% in BL21(DE3), which can be attributed to lysis-independent leakage. When cultivation was continued into stationary phase (past 14 h from induction), part of the cells lysed and released more Fab into the medium, as indicated by a reduction in OD 600 . In the end, 75% and 92% of total Fab activity was found in the medium in RV308 and BL21(DE3), respectively. Based on OD 600 , the degree of lysis between 14 h and 25 h was 25% in RV308 and 46% in BL21(DE3). However, the decrease in OD 600 may also be partly due to shrinkage of cell size as the cells switch from active growth phase to stationary phase [31], and hence the degree of lysis may be slightly overestimated from the OD 600 data. Assuming 25% lysis in RV308 after 14 h, the maximum amount of Fab released by lysis is 0.25 × (total Fab activity at 25 h − extracellular Fab activity at 14 h). Hence it is calculated that during the 25 h expression period at least 55% of total active Fab leaked into the medium without accompanying cell lysis. Correspondingly, the percentage of Fab leakage without lysis is estimated to be at least 65% of total Fab in BL21(DE3). The data demonstrate that Fab leakage in the fed-batch medium begins several hours before significant lysis, and thus it is possible to harvest extracellular Fab in the absence of cytoplasmic proteins by optimizing the harvest time. Combined with the data on oxygen consumption (Figure 2), the leakage of Fab in E. coli RV308 starts around the time when respiratory activity of the culture decreases and the oxygen level increases. DOT recording in a small-scale E. coli BL21(DE3) expression culture (data not shown) indicated that in this strain a similar decrease in oxygen consumption takes place at 4-5 h after induction, which also coincides with the start of Fab accumulation into the medium.

Discussion
The finding that total Fab yields were reduced by high aeration was unexpected and contradictory to our earlier results with cytoplasmically expressed recombinant proteins in the fed-batch medium [23]. The largest effect of aeration on total Fab yield was observed in the autoinduction medium, in which total yield increased by 5- to 9-fold when shaker speed was reduced from 250 to 150 rpm. This is consistent with an earlier report by Blommel et al. [32], who demonstrated that protein expression in autoinduction media is highly dependent on the oxygenation state of the culture so that under oxygen-limited conditions lactose consumption is preferred over consumption of glycerol, which in turn promotes earlier induction and higher total yield of the recombinant protein. The sensitivity of lactose and glycerol utilization patterns to oxygen availability could explain the adverse effect of high aeration on Fab expression in the autoinduction medium, but the reason for yield reduction under high aeration in the fed-batch medium is not clear. It is known that increased DOT can cause oxidative damage to recombinant proteins and their expression [33], but further studies would be needed to elucidate whether the observed reduction in functional Fab yield is due to oxygenation-dependent changes in the host metabolism or in the oxidative folding of Fab fragments in the periplasm.
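The lysis-correction arithmetic used in the timeline analysis above can be written out explicitly. The following is a minimal sketch in Python; the function name and the example numbers are illustrative assumptions of ours rather than values or code from the study, which worked with the measured ELISA activities.

```python
def leakage_without_lysis(total_fab_25h, extracellular_fab_14h,
                          extracellular_fab_25h, lysis_fraction):
    """Estimate the fraction of total Fab that leaked from intact cells.

    Reasoning as in the text: Fab released by lysis after 14 h is at most
    lysis_fraction * (total Fab activity at 25 h - extracellular Fab activity
    at 14 h); the remainder of the extracellular Fab at 25 h is attributed to
    lysis-independent leakage. All activities must share the same units.
    """
    max_released_by_lysis = lysis_fraction * (total_fab_25h - extracellular_fab_14h)
    leaked_from_intact_cells = extracellular_fab_25h - max_released_by_lysis
    return leaked_from_intact_cells / total_fab_25h


# Illustrative numbers only: 30% of total activity extracellular at 14 h,
# 75% at 25 h, and 25% lysis estimated from the OD600 decrease (RV308 case).
fraction = leakage_without_lysis(100.0, 30.0, 75.0, 0.25)
print(f"At least {fraction:.0%} of total Fab leaked without accompanying lysis")
```

With these rounded example inputs the sketch gives roughly 58%, in the same range as the "at least 55%" reported for RV308 from the actual measured activities.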
Interestingly, it seems that high DOT might be less detrimental to Fab expression in the fed-batch medium when the complex nutrient supplementation at induction is excluded and the post-induction growth rate is lower. It is commonly acknowledged that antibody fragments can leak from E. coli periplasm to culture medium [4,34], and that this leakage takes place especially during extended cultivation periods [35]. In this study we observed the accumulation of Fab fragments in extracellular medium to increase under conditions of lower oxygen availability. This effect was observed in E. coli RV308 in the fed-batch medium, and in E. coli BL21(DE3) in the autoinduction medium. Part of the increased release of Fab into the cultivation medium can be attributed to increased cell lysis, but leakage without lysis also appears to increase significantly when aeration efficiency is reduced. The increase in Fab leakage could be due to the direct influence of DOT during expression, or due to changes in growth rate as a result of reduced oxygen supply. Growth rate has been reported to modify the outer membrane protein and lipid composition, and consequently influence the efficiency of periplasmic protein leakage [36,37]. In the study of Bäcklund et al. [37], increased growth rate in glucose-limited fed-batch cultures contributed to higher product leakage into the extracellular medium, while according to Shokri et al. [36] the influence of growth rate may not be linear, as they observed maximum leakage at a growth rate of 0.3 h -1 , below or above which leakage decreased significantly. Both cell lysis and leakage from intact cells were influenced by the growth rate, and these were accompanied by changes in outer membrane lipid composition so that a maximum in unsaturated fatty acids and a minimum in saturated fatty acids coincided with the maximum in protein leakage. Therefore, changes in growth rate may be at least part of the mechanism by which the modification of oxygen supply via shaking speed or surface to volume ratio influenced the Fab leakage in our study. Growth rate during Fab expression could also be a contributing factor to the differences in the ratio of periplasmic and extracellular Fab observed between the different growth media. Aeration can also influence the membrane lipid composition independently of growth rate, as has been shown earlier in chemostat cultures [38]. A decrease in aeration rate was reported to result in a decrease in unsaturated fatty acids and an increase in cyclopropane fatty acids. An earlier study also reported similar changes in response to lower aeration rate [39]. The effect of these changes on protein leakage was not studied, but since both the decrease in unsaturated fatty acids and the increase in cyclopropane acids are known to reduce membrane fluidity, they can be expected to contribute to reduced protein leakage. This seems contradictory to our findings that showed increased leakage at lower aeration rate even when the effect of lysis was subtracted. On the other hand, our data suggest that the beginning of Fab leakage may coincide with an increase in DOT after a period of low oxygen saturation. Such a sharp change in DOT level contributes to substantial changes in the relative abundance of several outer membrane proteins [40], and this reorganization of the membrane structure could promote higher membrane permeability and leakage of the periplasmic product.
Alternatively, it could also be the cumulative accumulation of Fab in the periplasm that eventually initiates leakage due to diffusive pressure, and decrease in OUR could coincide this moment as a result of the stress of high Fab accumulation on the cell and consequent decrease in growth rate. Since reduced aeration generally contributed to higher total yield of Fab, the diffusive pressure would be higher under these conditions. Also the increased transport of recombinant product to periplasm might in itself reduce the ability of the cell to transport structural elements to the outer membrane [36], resulting in a more permeable membrane structure allowing for higher diffusive leakage after sufficient product accumulation in the periplasm. However, total Fab concentration and leakage were not always correlated. In some cases increased leakage was observed without accompanying increase in total yield, suggesting that the leakage is more dependent on other factors than the periplasmic Fab concentration. These most likely include changes in the outer membrane composition due to either direct or indirect effects of DOT. Extracellular pH may also affect the membrane fatty acid composition and hence the leakage efficiency of periplasmic proteins. There seems to be a tendency towards higher percentage of unsaturated fatty acids and lower percentage of cyclopropane acids with increasing pH [39], which suggests that higher pH might promote higher membrane permeability. However, we observed that lower oxygen availability contributed to increased Fab leakage in E. coli RV308 also in the absence of notable pH change, while in E. coli BL21(DE3) a pH decrease caused by reduced oxygen supply in the fed-batch medium was not associated with changes in Fab localization. Moreover, reduced aeration efficiency had opposite effects on pH in the fed-batch and autoinduction media, whereas Fab leakage increased in both. Thus it seems that the effect of pH at least in the range of 6.4 to 7.4 is minor, if any, and aeration influences Fab leakage by other mechanisms. While further studies would be needed to confirm the independent effects of DOT, growth rate and pH on Fab leakage, our findings about the changes in Fab yield and leakage in response to aeration efficiency and medium composition have important practical implications for Fab production in shaken cultures. It is usually most straightforward to purify Fab fragments directly from the culture medium, and hence the goal in Fab production is in most cases to maximize the extracellular yield. Based on our results the extracellular Fab yield can be maximized by cultivation in the fed-batch medium with complex nutrient supplementation under moderately oxygen limited conditions. Both E. coli BL21 and E. coli RV308 are good hosts for the extracellular production in the fed-batch medium. However, maximum Fab accumulation in the culture medium requires long cultivation periods during which cell lysis takes place to a significant degree, resulting in presence of background cellular proteins in the medium. In this regard E. coli RV308 seems to be a convenient strain for production of extracellular antibody fragments, as it has lower lysis rate than E. coli BL21 but under sufficiently oxygen-limited conditions can release substantial amounts of Fab into the cultivation medium in a lysis-independent manner. In both strains, however, maximum recovery of extracellular product while minimizing release of cytoplasmic proteins is a matter of optimizing the harvest time. 
In some specific cases it may be preferable to collect Fabs from the periplasm, and the best strategy for maximizing the periplasmic yield seems to be expression in E. coli RV308 and the fed-batch medium with exclusion of the complex nutrient supplementation at induction. This approach minimizes Fab leakage and maintains higher overall yield than cultivation with the nutrient supplementation under high aeration conditions. Since maximum Fab expression is achieved at relatively low aeration rates, Fab production at larger scale could be well accomplished in vessels such as disposable wave-mixed bioreactors [41] that have lower aeration rates compared to stirred bioreactors. The enzyme-based fed-batch system should be well suited to larger scale Fab production as it has been demonstrated well scalable up to pilot plant scale [42] and applicable to disposable bag bioreactors [43]. Our results also highlight the importance of aeration rate as a cultivation parameter in laboratory-scale shaken cultures which are often performed without appropriate consideration of oxygen transfer. If the aeration efficiency and other factors contributing to oxygen saturation during cultivation are not controlled when the system is scaled up from one type or size of vessel to another, productivity of the culture may vary considerably due to changes in DOT. Changes in aeration can also result in surprising effects beyond expression yield, as was the case with periplasmic protein leakage in this study. Conclusions In conclusion, we demonstrated that the yield and leakage of Fab fragments are highly dependent on expression strain, culture medium, aeration efficiency, and the combination of these parameters. High yields of Fab fragments were obtained in both E. coli K-12 strain and BL21 strain in a medium with fed-batch-like glucose feeding, and in E. coli BL21 in a glycerol-based autoinduction medium. Regardless of strain and medium, maximum volumetric productivity was achieved under sufficiently oxygen-limited conditions. Also the leakage of Fabs into the culture medium increased considerably under lower aeration conditions. This dependency may cause gaps in reproducibility when scaling up or down if oxygen supply or consumption rate are changed, but it also offers a practical way to efficiently manipulate the ratio of product localization in periplasm and extracellular medium. Expression constructs The gene sequences encoding four different Fab fragments (Veijola et al., manuscript in preparation) against N-terminal prohormone of brain natriuretic peptide (NTproBNP) were each cloned to a modified pKK233 expression vector backbone (Veijola et al., manuscript in preparation) under the control of tac promoter, and transformed into E. coli BL21(DE3) and RV308 using standard cloning and transformation procedures. The pKK233 vector encodes resistance to ampicillin. The Fab fragments contained an N-terminal pelB signal sequence for periplasmic transport in both the heavy and the light chains. Additionally, a C-terminal hexahistidine tag was included in the heavy chain. Three of the Fab fragments (coded as F1, F16 and F32) bind to their epitopes near the C-terminal end of NTproBNP, while one fragment (coded as 1B10) binds near the N-terminal end of the antigen. The general layout of the expression construct is shown in Figure 7. Media Fed-batch-like cultivation conditions were provided by using the EnBase system with enzyme-based glucose release from soluble polysaccharide. 
The medium was prepared by dissolving EnPresso® medium tablets (BioSilta, Oulu, Finland) into sterile water. As described previously [22], the medium consists of mineral salts, MgSO 4 , thiamine, trace elements solution, soluble polysaccharide substrate, and a low amount of complex nutrients. After dissolution of the tablets, the medium was supplemented with 1 g l -1 glucose and pH was adjusted to 7.4 by adding 1.6 ml of 2M NaOH to each 100 ml of medium. Cultures in shake flasks were supplied with 0.6 U l -1 of the glucose-releasing biocatalyst (EnZ I'm, BioSilta) before inoculation. Screening cultures in 24 deep well plates were grown as a batch without biocatalyst until induction. At the time of induction, all cultures in 24 deep well plates and shake flasks were supplied with 3 U l -1 biocatalyst and the EnPresso Booster (BioSilta) providing complex nutrients (peptone and yeast extract). Super Broth medium with MOPS buffering (SB-MOPS) contained (per liter): tryptone 35 g, yeast extract 20 g, NaCl 5 g, MOPS 10 g; pH was adjusted to 7.0. SB-MOPS for pre-induction growth was supplemented with 2 g l -1 glucose, and for induction the cells were transferred to fresh glucose-free SB-MOPS. All media contained 100 μg ml -1 ampicillin for selective maintenance of the plasmid. For cultivation in the baffled shake flasks media were supplemented with 0.1 ml l -1 antifoam (Sigma 204). Deep well plate cultivations Culture media were inoculated with Fab-expressing clones with high cell density glycerol stocks (OD 600 of 30-70) to OD 600 of 0.1-0.15. Broth volume was 3 ml in round-bottom square-shaped wells of 24-deep well plates (24dwp; Thomson Instrument, Part No. 931565-G-1X), and the plates were covered with adhesive porous membrane seals (Thomson Instrument, Part No. 899410). All plate cultivations were performed at 250 rpm in an orbital shaker with 25 mm offset (Infors HT Multitron, Infors AG). Under these conditions, the approximate evaporation rate was 7% of original volume within 19 h and 23% within 42 h. The concentration of broth as a result of evaporation as well as the dilution of the fed-batch cultures due to Booster addition were both accounted for when calculating the results so that the evaporation and dilution effects were eliminated from the Fab concentrations. Cultures in the fed-batch medium were grown overnight at 30°C, followed by induction at 17 h with 0.2 mM IPTG and simultaneous addition of 10x Booster concentrate (to 1:10 v/v) and biocatalyst (3 U l -1 ). Incubation was continued for further 24 h at 30°C. Cultures in Super Broth medium were grown with 2 g l -1 glucose to OD 600 of 0.5-0.8. 2 × 3 ml cultures were then collected into a single vial, and cells were gently spun down at room temperature. Supernatant was discarded and the pellet was resuspended in 3 ml of glucose-free SB-MOPS with 0.05 mM IPTG (for E. coli RV308) or 0.2 mM IPTG (for E. coli BL21). IPTG concentrations had been previously optimized for maximum Fab expression in SB-MOPS (data not shown). The suspension was transferred back to 24dwp and incubated for 19 h at 30°C. Autoinduction cultures in ZYM-5052 medium were incubated for 41 h at 30°C. Shake flask cultivations For flask-scale expression, 50 ml cultures were inoculated with high cell density glycerol stocks to OD 600 of 0.1-0. 15 with IPTG, one EnPresso Booster tablet and 3 U l -1 of the biocatalyst were added. Shaking speed after induction was alternatively 250 rpm or 150 rpm. 
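As a worked illustration of the evaporation and Booster dilution bookkeeping mentioned above for the deep well plate cultures, the sketch below shows one way such a correction could be computed. This is our own minimal example under stated assumptions: the approximate 7% (19 h) and 23% (42 h) evaporation figures come from the text, but the correction formula, the function name and the sample numbers are illustrative and not taken from the study.

```python
def corrected_fab_concentration(measured_mg_per_l, evaporated_fraction,
                                booster_volume_ratio=0.10):
    """Express a measured Fab concentration on the original culture volume basis.

    Evaporation concentrates the broth, so the measured value is scaled down by
    (1 - evaporated_fraction); Booster addition at 1:10 v/v adds ~10% volume and
    dilutes the product, so the value is scaled back up by (1 + ratio).
    This is a simplified sketch of the correction, not the authors' exact calculation.
    """
    evaporation_corrected = measured_mg_per_l * (1.0 - evaporated_fraction)
    return evaporation_corrected * (1.0 + booster_volume_ratio)


# Example: a 42 h deep well plate sample (about 23% evaporation per the text)
# with a hypothetical measured concentration of 12 mg/l.
print(round(corrected_fab_concentration(12.0, 0.23), 1), "mg/l on the original volume basis")
```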
Cultures in autoinduction medium were incubated at 250 rpm or alternatively at 150 rpm. In all shake flask experiments, broth volume was measured every time a sample was taken. This data was used in the calculation of results to eliminate the effect of different evaporation rates from Fab concentrations and OD data. Cultivation with online oxygen monitoring An additional experiment was performed with the fed-batch medium in a 24 round-well plate with integrated optical oxygen sensors (OxoDish®, PreSens GmbH, Regensburg, Germany) in each well. The plate was placed onto SDR SensorDish® Reader (Presens GmbH), and the plate and reader were fixed to an orbital shaker with 50 mm offset. Cultivations were performed with 1.1. ml culture volume at 30°C and 200 rpm with online recording of dissolved oxygen tension (DOT) in 5 minute intervals. In these experiments Fab fragment F1 was expressed in E. coli RV308 in the fed-batch medium. The polysaccharide substrate in the medium was different from the previous experiments, and the peptone component was replaced by an animal-free peptone. Cultivations were started with 0.25 g l -1 glucose and varying concentrations of biocatalyst at pH 7.3. Cultures were induced with 0.2 mM IPTG at 18 h, and at the same time half of the cultures received Booster and more biocatalyst (3 U l -1 ). Cultivations were continued for further 24 h after induction. Monitoring of culture growth and pH Culture growth was monitored by offline cell density measurements at selected time points. Cell density was determined by measuring optical density at 600 nm (OD 600 ). OD 600 of 1 corresponds to a dry cell weight of 0.27 g l -1 . Culture pH level was monitored by offline measurements of 150 μl broth samples by IQ2400 pH probe (IQ Scientific). Determination of Fab expression level To quantitate the Fab yields, 100 μl broth samples were collected and centrifuged at 13,300 × g and 4°C for 4 min. The supernatant was collected into a separate vial, and the cell pellets and supernatants were both stored at −20°C. For cell disruption the pellets were thawed, suspended in 100 μl of BugBuster (Novagen) and lysed by addition of 2 μl Lysonase Bioprocessing Reagent (Novagen). Cell lysates and broth supernatants were prepared for analysis by centrifugation at 13,300 × g and 4°C for 4 min to remove cell debris and other insolubles. The quantity of functional Fab in the cell lysates and broth supernatants was determined by indirect ELISA. Immuno™ 96-well MaxiSorp™ plates (Nunc) were coated with 0.1 ml of 1 μg ml -1 thioredoxin-NTproBNP fusion antigen at 4°C overnight. The wells were washed three times with PBS + 0.05% Tween-20 (PBST), blocked for 20 min with 1% bovine serum albumin and 0.2% gelatine in PBST buffer (BSA-gelatine-PBST), and washed again three times. 0.1 ml of 1:1000 sample dilutions in BSAgelatine-PBST were added to the wells and left to bind for 1 h at room temperature, followed by eight wash cycles with PBST. Goat anti-mouse IgG (Fab specific)alkaline phosphatase (Sigma Aldrich) was diluted 1:5000 in BSA-gelatine-PBST and applied as the secondary antibody. The secondary antibody was incubated in the wells for 30-50 min, followed by seven wash cycles. To detect alkaline phosphatase activity, 0.1 ml of p-nitrophenyl phosphate solution (prepared from SIGMAFAST™ tablets, Sigma Aldrich) was added to the wells, and the absorbance at 405 nm was recorded with Thermo MultiSkan plate reader after 5 to 45 min depending on the signal strength. 
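To make the quantification step concrete, the sketch below shows how an A 405 reading could be converted to a Fab concentration with a per-fragment standard curve, as described in the following paragraph. A simple linear fit over the working range is assumed here purely for illustration; the study does not state which curve model was used, and the standard and sample values below are invented.

```python
import numpy as np

def fit_standard_curve(standard_mg_per_l, standard_a405):
    """Fit A405 = slope * concentration + intercept to purified-Fab standards."""
    slope, intercept = np.polyfit(standard_mg_per_l, standard_a405, 1)
    return slope, intercept

def a405_to_mg_per_l(a405, slope, intercept, sample_dilution=1000):
    """Convert a sample absorbance to mg/l Fab, accounting for the 1:1000 sample dilution."""
    concentration_in_well = (a405 - intercept) / slope
    return concentration_in_well * sample_dilution

# Hypothetical standards for one Fab (in-well concentrations; invented values)
standards_mg_per_l = [0.001, 0.005, 0.010, 0.020]
standards_a405 = [0.08, 0.35, 0.70, 1.40]
slope, intercept = fit_standard_curve(standards_mg_per_l, standards_a405)
print(round(a405_to_mg_per_l(0.55, slope, intercept), 1), "mg/l Fab in the undiluted sample")
```

Because binding affinities differed widely between the four Fabs, a separate curve of this kind would be needed for each fragment, as noted below.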
To convert the A 405 signal to Fab concentration in mg l -1 , purified Fab standards of known concentration were added to the plate to create a standard curve for A 405 against mg l -1 Fab. For each of the four Fabs, the standard curve was created with a purified solution of exactly the same Fab as the binding affinities to the antigen varied widely between the different Fabs. The periplasmic Fab fraction was analysed from whole cell lysate with the assumption that all detected Fab activity originated from the periplasmic space. It is assumed that only correctly folded and biologically active Fab fragments bound to the antigen and were quantified, and folding of the Fab to its functional form within cytoplasm is generally a very limited occurrence due to the unfavorable redox conditions. Cytoplasmic assembly to functional conformation would be virtually impossible also due to the signal peptide that is only cleaved during translocation to the periplasm. On these grounds all Fab activity in the lysate can be accounted to periplasmic Fab. This was also experimentally verified with a limited number of cell pellet samples by comparing the Fab activity in whole cell lysate and periplasmic extract generated via lysozyme treatment in cold sucrose solution (30 mM Tris-HCl, 1 mM EDTA, 40% sucrose, pH 8.0; Neu and Heppel [44]). Two hours incubation in the lysozyme-sucrose solution at +4°C was sufficient to release periplasmic proteins without lysing the cells. Analysis by ELISA confirmed equal yields of functional Fab in the lysate and the periplasmic extract (data not shown). Estimation of cell lysis To estimate the degree of cell lysis in shake flask cultures, the amounts of total protein in cell pellet
Incorporating Culture into Listening Comprehension Through Presentation of Movies

The use of movie videos as an instructional aid in the teaching of English as a foreign language (EFL) should be encouraged due to various pedagogical benefits. This article attempts to suggest a technique of utilizing movies in English listening classes in order to improve the aural perception skills of the learners. It comprises three stages: previewing, viewing, and postviewing. In the previewing stage, learners read a brief description of the theme of the movie to activate their prior knowledge, guess the meaning of certain keywords presented in sentential contexts, or familiarize themselves with the main characters. In the viewing stage, they watch the movie, either with or without subtitles, and while doing so they are supposed to answer several questions in written form. Finally, the learners are engaged in a postviewing activity in the form of contrasting cultures reflected in the movies. This technique of teaching listening has proved to be effective in developing listening skills in a foreign language and sensitizing the learners to the target culture, which is an inseparable aspect of language learning.

It is essential, however, that a review of the concepts involved in this technique, namely culture, language, and listening comprehension, precede the detailed explication of the technique itself. Consequently, this paper will be divided into three sections. The first section will briefly examine the interlocking nature of language and culture as it relates to EFL, then in the second one the processes that occur during aural comprehension of EFL will be reviewed. Finally, the teaching technique will be presented in detail.

LANGUAGE AND CULTURE
To date culture has been defined as various different, although somewhat related, concepts, ranging from a group of people who share the same background (Matikainen and Duff, 2000) to the way of life of a society (Straub, 1999). As a matter of fact, Kroeber and Kluckhohn (1954, as cited by Lessard-Clouston, 1997) obtained more than three hundred definitions of culture in their investigation. This confirms the notoriously-hard-to-define nature of the term 'culture', as asserted by Byram and Grundy (2002). However, it might be appropriate and practical to consider culture in both its narrow and broader senses as the dyad of little "c" culture and big "C" culture. The little "c" culture refers to the aspects of lifestyle or patterns of daily living, while the big "C" culture represents a civilization's accomplishments in literature and the fine arts, its social institutions, its history, geography, and political system (Herron et al., 2002). Pryor (2004), who argues that culture lies at the heart of curriculum, views culture as learned patterns of values, beliefs, perceptions and behaviors which are shared by groups of people but may be practiced differently by the members in a particular situation. She further states that these cultural patterns can be observed in the language they use, in addition to organization, customs and material products. It is apparent, then, that language is value-laden; therefore, EFL learners definitely still grasp the values that exist in the target culture even if the culture is not taught explicitly in the language class (Lessard-Clouston, 1997). On this ground it is quite reasonable that Herron et al. (2002) claim culture becomes the core of foreign language instruction.
Apparently culture plays a vital role in EFL learning; therefore, it seems to be a wise suggestion to integrate it into language instruction, regardless of the language skills (listening, speaking, reading, and writing) to be taught. As the present paper deals mainly with listening, this particular language skill will be reviewed briefly in the next section, before the integration of culture and listening is discussed.

LISTENING COMPREHENSION
Formerly thought of as a passive language skill, listening was often assigned less emphasis in EFL classes than the active skill, speaking (Herschenhorn, 1979). The label of passive language skill, however, is actually a misnomer as listening requires active processing in the learners' mind despite the superficially silent activity of perceiving oral stimulus. When receiving such stimulus and then attempting to make sense of it, the learners interactively perform two types of cognitive processing, namely bottom-up (data-driven) and top-down (conceptually-driven).

The bottom-up processing involves constructing meaning from the smallest unit of the spoken language to the largest one in a linear mode (Nunan, 1998). Thus, the learners make an effort to understand a spoken discourse by decoding a number of sounds to form words. Next, a nexus of words is linked to form phrases, which make up sentences. These sentences build a complete text, the meaning of which is then constructed by the listeners. In addition to the grammatical relationships, such suprasegmental phonemes as stress, rhythm and intonation also substantially contribute to this data-driven processing (van Duzer, 1997). Learners can be trained to perform this processing, for instance, by activities that require them to discriminate two sounds or distinguish rising and falling intonations.

The top-down processing, on the other hand, refers to interpreting meaning as intended by the speakers by means of schemata or structures of knowledge in the mind (Nunan, 1998). This view emphasizes the prominence of background knowledge already possessed by the learners in making sense of the information they hear. In aural perception, the prior knowledge may facilitate their attempt to grasp the incoming information by relating the familiar with the new, and a significant lack of such knowledge can hamper their efforts to comprehend a particular utterance. It is, therefore, essential that learners are accustomed to performing this processing, usually by extracting the gist of the exchange they listen to.

Thus, it is justified that listening is not a passive language skill. Contrary to the misleading popular belief, comprehension through the auditory channel requires some cognitive processes that interact actively in a simultaneous manner. To aid the learners in performing these processes better, various materials should be carefully selected to ensure a conducive atmosphere of learning. Creating such an atmosphere can easily be accomplished by selecting commercially available films in video format and employing them as listening materials in the language lab.
PRESENTATION OF MOVIES IN LISTENING CLASS
Movie videos should not be regarded as merely a peripheral 'extra' in a listening class; on the contrary, they can function as the core content and become an integral part of the curriculum (Sommer, 2001). Appropriate, creative exploitation of these movies can reveal their potential in fostering the acquisition of listening skills (Eken, 2003); therefore, their use as instructional materials in listening lessons should be encouraged due to at least four pedagogical values.

The first benefit relates to motivation: films about issues that draw the learners' interest can positively affect their motivation to learn (Stempleski, 1992; Allan, 1985; Lonergan, 1984). My experience of teaching university students proved this to be true. Every time I dimmed the lights in the language lab at the beginning of the listening lessons, the students immediately realized that a movie video was about to be played and they always demonstrated enthusiasm, which would not diminish even after the movie ended. Second, the movies assist the learners' comprehension by enabling them to listen to exchanges and see such visual supports as facial expressions and gestures simultaneously (Allan, 1985; Sheerin, 1982), which may boost their insights into the topic of the conversations. In real life, unless they are speaking on the telephone or listening to the radio, such visual supports are virtually always present to accompany the verbal exchanges, so the existence of facial expressions and gestures in the movies can simulate the dialogues in real situations. In addition to the visual supports, the films also provide exposure to the language uttered in authentic settings (Stempleski, 1992; Telatnik and Kruse, 1982). This third benefit, i.e. authentic language, is extremely valuable in assisting the students to prepare for participation in real conversations because the exchanges in the movies are very similar to the ones in real life in terms of the rate of delivery, the choice of words and the tendency toward truncations (such as elliptical structures and contractions), as opposed to the exchanges in the majority of commercial listening materials, which may sound quite artificial. Finally, the movies present the cultural context of the conversations (Herron et al., 2002; Chapple and Curtis, 2000), hence enhancing more appropriate use of language and preventing cross-cultural misunderstandings. Further, they can be a useful "springboard" (Toplin, 2002) to investigate the target culture.
These advantageous aspects of movies as listening materials provide sufficiently strong ground for language educators to have them shown in EFL instruction. Although presenting full-length movies in a classroom invites objections mainly due to the time constraint (Bluestone, 2000), after repeated practice I am confident in maintaining that this does not necessarily become a significant obstacle. In the tertiary institutions where I have implemented this teaching technique, one session of a listening lesson lasts for 100 minutes. Given this condition, showing one entire movie (approximately 90-115 minutes long) and doing the relevant exercises apparently cannot be completed in a single session, but this can be easily overcome by splitting the presentation into two sessions. As a matter of fact, such a division of time can cater for invaluable opportunities for the learners to exercise their power of imagination if we, the learning facilitators, are imaginative in assisting the learners to construct knowledge from the aural perception.

The movie can be presented in two modes: with or without subtitles. The decision to include the subtitles or otherwise in a movie presentation relies on the complexity level of the story and the nature of the speech. If the theme and the plot appear to be too complicated to apprehend (for example, Dead Poets Society), it is a wise choice to show the subtitles as this will save the learners from an arduous task, i.e. concentrating on the theme while at the same time grappling with the language. In addition to the complexity of the content, several attributes of the utterances also determine the results of such a decision, for instance, the density of the language and the rate and accent of delivery. Characters who articulate excessively unfamiliar technical terms (such as legal jargon in Music Box), words with a particular accent (A Walk in the Clouds), or fast speech (Next Stop Wonderland) can possibly hamper the learners' efforts to comprehend, so I prefer to provide subtitles in their first language when such movies are on display.

To guide them in comprehending ideas from the oral input as well as strengthening their imaginative faculty, I devise a handout to accompany the movie viewing. This handout comprises three parts and reflects the stages that they undergo during the lessons: previewing, while-viewing, and post-viewing (Allan, 1984; Underwood, 1989). A description of each stage will be elaborated below.

Previewing
It is a common practice in instruction on language decoding (including listening) that at this beginning stage the teacher spends a sufficient amount of time helping the learners build the appropriate schemata to facilitate comprehension (van Duzer, 1997). This conceptually-driven style of teaching is believed to enable the learners to provide a 'hook' that relates the knowledge they already possess to the knowledge to be acquired, making the acquisition occur more smoothly.
Generally the previewing stage consists of two activities, namely, introducing the theme of the movie and preteaching the key vocabulary (Allan, 1985; Tomalin, 1986; Sheerin, 1982). Additionally, some teachers believe it might be quite fruitful to familiarize the learners with the main characters of the movie prior to viewing. Although Chung and Huang (1998) found that preteaching the vocabulary is a better form of advance organizer than the description of the main characters, based on my personal experience in listening classes the latter proves to be helpful in assisting the students' comprehension. Working within this framework, at the beginning of the session I briefly describe the theme that underlies the whole plot of the movie, and also the presence or the absence of subtitles in the mother tongue.

Afterwards, I administer a worksheet and an answer sheet, and have the learners scan the items in the worksheet for a few minutes to familiarize themselves with the learning activities to be carried out before, during and after viewing the movie. If the film is presented with subtitles, before watching it the learners need to read the brief description of the theme and the main characters (Figure 1). Usually the activity of guessing word meaning is not included here because lexical items will be a part of the while-viewing stage.

Part I: Before watching
You are going to watch an interesting movie about Erin and Alan, two strangers who crossed paths several times without realizing each other's presence. Although destiny seemed to push them further and further apart, it had something nice for Erin and Alan. Now familiarize yourselves with the following characters to help you comprehend the movie.
Erin: a nurse, who dropped out of Harvard medical school.
Alan: a student of marine biology, volunteering in the aquarium.
Sean: an activist, Erin's former boyfriend.
Piper: Erin's mother, who is eager to find a partner for her daughter.

However, if the subtitles are absent, in addition to describing the theme I review a number of keywords from the movie to cater for a scaffold that will assist them in the comprehension later (Figure 2). Otherwise, the learners need to expend extra effort to understand what is happening in the movie and may give up disheartened if they fail to do so. Rather than simply telling them the meaning of these keywords, I prefer presenting them in sentential context and asking the students to perform intelligent guessing to figure out the meaning of each on the basis of the context. Retention is expected to be better if they construct and discover the meaning themselves. The students are supposed to do the vocabulary building exercise orally: I ask them to brainstorm the meaning, giving them a chance to express the inferred synonym or explain the definition to the rest of the class. It is the learners who construct the meaning, and my role in this activity is merely to provide feedback on the accuracy of their inference.
While-viewing
Immediately after the previewing stage, I engage them in the core activity: viewing the movie. While doing so, they are supposed to answer some items in the worksheet in written form. Again, the presence of subtitles in the film determines the types of questions to ask in the worksheet. If the film is shown with subtitles in the mother tongue, I ask them some questions to check their comprehension and also some others to improve their lexical knowledge. A sample of worksheet items for the movie Next Stop Wonderland to exemplify these two types of questions can be found in Figure 3.

Part I: Before watching
You are going to watch a movie about Ferris, a high school student who took one last day off before graduating. It also portrays how he cleverly spent it and how it eventually changed his friends' and his own life in the end. Now guess the meaning of the words/phrases below, using the context as a clue.
When his sister broke his favorite ashtray, he went berserk.
This is my father's daytime number. You could call him in his office by dialing this number.
The downtown area is always busy because there are a lot of shops and offices there.

1. What was the relationship between Lewis Castleton, the writer of "Heart Needs Home", and Erin?
2. Why did Erin often point to words in books randomly?
Figure 3. While-viewing Items of Worksheet for Next Stop Wonderland

If the subtitles in the first language are not shown, the items usually include comprehension questions only, as illustrated in the excerpt of the worksheet for the movie Ferris Bueller's Day Off in Figure 4.

Part II: While Watching
At home
1. Why did Ferris call Cameron that morning?
2. What were the tips given by Ferris's father over the phone? (Take a ..., Wrap a ..., Make ..., Get a ...)
At school
3. What did the school's nurse tell Sloane?
Figure 4. While-viewing Items of Worksheet for Ferris Bueller's Day Off

Each item consists of a brief description of the scene to refer to (written in italics) and one or more questions to be answered. The description of the scene assists the learners to direct their attention to a particular spot in the movie which is related to the question(s) being asked. For instance, the label of 'In the restaurant' above item number 1 in the worksheet for Next Stop Wonderland prompts the students to become more alert when they see the scene of a restaurant on the screen and know immediately what specific information to look for, i.e. the English translation of menawarkan tumpangan (offer a ride) and kalau hilang, ya sudah (lost is lost). Such an item is intended to increase their vocabulary size by encouraging them to match the Indonesian subtitles with what the film character utters in English.

The comprehension questions are written in similar fashion, i.e. questions preceded by a clue of the scene, unless the questions need to be answered by grasping the ideas and/or inferring the answers from the entire movie. In the latter case, the description of the scene is occasionally not required.
In spite of the slight difference in the content of questions for the films with and without subtitles, they are presented in approximately the same way. The following describes the complete set of activities to be done during the viewing stage. First, allow the students one minute or two for a quick review of the scenes and the questions written in the worksheet, so that they have an idea of the scenes to watch in the entire movie and can focus their attention on the information to seek. Next, play the movie, and after each scene mentioned in the worksheet pause for 15 to 60 seconds, depending on the length of the required answer. During this pause, have them supply a correct, brief answer. Occasionally, after viewing a scene once, the students still find it quite difficult to recognize the words spoken by the characters or understand their exchange, and request a repetition of that particular scene. In dealing with such a situation, I should emphasize that this exercise aims at enabling and guiding them to comprehend and construct meaning from utterances in the target language, rather than testing their listening ability. As a consequence, they deserve a second chance to view the scene in order to promote better comprehension.

I have mentioned earlier in this paper that due to the time constraint a particular film has to be presented in two sessions, and this split turns out to be an advantageous point in the lesson rather than otherwise, as it caters for an opportunity of stimulating the learners' imaginative capability. I always finish the first session by stopping the tape or the disc when the story in the movie seemingly gets bleak and unpromising, then have the learners predict how the story will end. To illustrate, I press the 'stop' button of the video player in the following scenes: in Ferris Bueller's Day Off, after Cameron Fry crashes his father's car out of the garage and severely damages it; in A Walk in the Clouds, after Victoria Aragon turns the bedroom light on and Paul Sutton watches the light from a distance in the vineyard; in Chocolat, after Anouk accidentally drops the container made of clay and begs for an apology from Vianne, agreeing to leave; and in Music Box, after Ann Talbot discovers the old pictures hidden in the music box.

They should write their predictions briefly (usually not more than 5 sentences) on the answer sheet. To do so, they invariably exercise their imaginative power to figure out what events will be likely to occur at the end of the movie based on the existing clues. It is definitely an interesting experience for me to find how remotely different one prediction can be from another. It even takes me by surprise to learn that some of my students' predictions often resemble the unexpected twist in the movie's ending. This activity involving imagination closes the first session. In the next listening session (usually the following week), I play the rest of the tape/disc and have them continue answering the rest of the items.
Postviewing

Upon completing the while-viewing activities, they proceed to the postviewing ones. By this time they have already seen the end of the movie and can verify the written result of their prediction against the actual ending. Despite differences that may come up between these two, all of the students' work must be appreciated. The exact or approximate match between what the learners have guessed and what actually occurs in the movie does not matter much. It is the process of arriving at the predicted ending which should be acknowledged.

After reviewing the results of the prediction briefly, the learners are engaged in the next postviewing activity, namely, examining the diversity across cultures, which can be done in two alternative ways. One way to accomplish this is to have the learners identify how the target culture in the film differs significantly from their own culture. The inclusion of their home culture is essential, as the awareness that they are members of a certain culture assists them in recognizing the "values, expectations, behaviors, traditions, customs, rituals, forms of greeting, cultural signs, and identity symbols" (Straub, 1999) in their surroundings. Such awareness can lead them to interpret the same aspects of other cultures more objectively. To facilitate the students' attempt to contrast the cultures, I usually devise a table that can help them spot the distinct aspects of both. Figure 5 is an example of such an item taken from the worksheet for Ferris Bueller's Day Off.

Part III: After watching
You have seen some activities or events that reflect the American culture in the movie, and they are quite different in Indonesia. In the left column, list these typically American activities or events. In the right column, write the equivalent ones in Indonesia.

This item is designed to allow the learners to contrast the American culture, in which all the acts and events in the movie take place, and the Indonesian culture in which they have lived and been raised. The results indicate that most of them can accurately describe how differently the two cultures view the parent-children relationship, the teacher-students relationship, friendship, and other social issues.

Another method of examining diversity across cultures is making use of the cultural issues depicted in the films. If the film happens to be rich in cross-cultural materials to dig up, such as A Walk in the Clouds where the Mexican culture encounters the American one, I construct an item that directs the learners to delve into the distinct manners in which two cultures treat the same issue in the movie (Figure 6).

Part III: After watching
Paul, who came from Maligne (Illinois), had difficulties in adjusting to the life of the Aragons, who were of Mexican origin and lived in Napa (California). List a few things which Paul viewed differently from the Aragons and which therefore caused cross-cultural misunderstanding.

Figure 6. Postviewing Item of the Worksheet for A Walk in the Clouds

Schroeder (2004) points out the great linguistic diversity that exists in the U.S.A. and results from the influence of different cultures there. This serves as a reminder that cultural diversity exists not only between one country and another, but within a single country as well. EFL learners need to have knowledge of this to prevent the misperception that English native speakers live in a community which shares a uniform culture in different parts of one particular country.
It is strongly recommended that this activity of contrasting be followed by an assertion from the teacher that cultures simply differ and none is superior to the others. This is especially vital as some learners may tend to hold an inaccurate opinion that their own culture is "right" and "full of politeness", whereas the others are "wrong" and "full of unacceptable values". They should be made aware that diversity among cultures must be highly valued and respected, and such appreciation and an unprejudiced attitude are extremely beneficial when they are learning a foreign language, as language is inseparable from the culture where it is spoken.

Figure 1. Previewing Items of Worksheet for Next Stop Wonderland
Figure 2. Previewing Items of Worksheet for Ferris Bueller's Day Off
5,633.6
2015-09-03T00:00:00.000
[ "Education", "Linguistics" ]
Effects of field enhanced charge transfer on the luminescence properties of Si/SiO2 superlattices The effect of an externally applied electric field on exciton splitting and carrier transport was studied on 3.5 nm Si nanocrystals embedded in SiO2 superlattices with barrier oxide thicknesses varied between 2 and 4 nm. Through a series of photoluminescence measurements performed at both room temperature and with liquid N2 cooling, it was shown that the application of an electric field resulted in a reduction of luminescence intensity due to exciton splitting and charging of nanocrystals within the superlattices. This effect was found to be enhanced when surface defects at the Si/SiO2 interface were not passivated by H2 treatment and severely reduced for inter layer barrier oxide thicknesses above 3 nm. The findings point to the surface defects assisting in carrier transport, lowering the energy required for exciton splitting. Said enhancement was found to be diminished at low temperatures due to the freezing-in of phonons. We propose potential device design parameters for photon detection and tandem solar cell applications utilizing the quantum confinement effect based on the findings of the present study. Room temperature visible spectrum luminescence in quantum confined Si was reported as early as 1990 1 . Since then, Si nano crystals (SiNC) have been used as sensors and light emitting devices [2][3][4][5] . In such applications, the exploitation of the quantum confinement effect has allowed for tailoring the emission wavelengths and shifting the absorption edge [6][7][8] . One of the advantages of the SiNCs produced using the superlattice approach is the ability to directly grow NCs on the substrate along with an oxide layer providing a passive barrier against the spontaneous oxidation of Si 9,10 . In the case of solution synthesized NCs, this constitutes a technical limitation that would have to be overcome through the introduction of additional processing steps like passivation coatings and the construction of core-shell structures [11][12][13] . However, this advantage comes at a trade-off, wherein carrier transport through the superlattice is hindered by the dielectric 14 . This poses a hurdle to be overcome for applications such as current generation and electroluminescence where being able to run a current through the stack is imperative. While SiNC based photovoltaic cells alone would not improve the industrial scale power generation efficiency of such devices enough to warrant the utilization of a more complicated production method, they can be used in tandem devices with bulk Si. In such applications being able to tailor the band gap using the quantum confinement effect could increase efficiency by addressing the non-absorption losses 15 . Such tandem devices have been realized, albeit with the top cell limiting the short-circuit current 16 . There remains a need for further studies of the exciton generation and charge transport mechanics at the NC oxide interfaces. Additionally, the ability to control the absorption band edge is promising for infrared sensing applications 17 . This could allow for all-Si monolithically integrated wavelength selective photo detector arrays. Such color imaging devices relying on the quantum confinement effect have been realized with Si nanowires but, to the best of our knowledge not with quantum dots 18 . 
In order for a photo-current across the superlattice to be detected, excitons need to be split before recombination and the carriers need to be transported to the contacts 2 . Previously, the radiative lifetimes for SiNCs of similar size were experimentally determined to be 50 μs at RT and 250 μs at 80 K 19 . The transport of carriers through the dielectric in such systems has been studied experimentally 14 and within the context of polaron transport theory 20 . In vacuum, electrons can move freely as long as there is no potential barrier they cannot overcome. However, the key difference in the case of transport through a dielectric where electron-phonon coupling is higher, is that an electron moving through the dielectric would polarize the medium, carrying its own polarization as it travels 21 . Furthermore, the charging of a NC by an electron would also produce a change in its energy from the neutral state 22 . This necessitates the energy level that an electron is migrating to be lower than www.nature.com/scientificreports/ the energy level of the state it had initially occupied. Such conditions can be achieved through the application of an artificial external electric field across the medium. To this end we have studied the migration of photo-generated carriers through the superlattice under the effect of an external electric field of up to 2.5 MV/cm and as a function of the barrier oxide layer thickness (varied from 2 to 4 nm) with fixed NC size (3.5 nm). This was achieved by recording the emission spectra of the respective ensembles of SiNCs, analyzing the charge transport mechanics by investigating the change in photoluminescence (PL) intensity and comparing luminescence characteristics to electrical measurements. Methods Two sample sets (passivated and non-passivated) comprised of twenty-five 3.5 nm SiNC layers with inter-layeroxide barriers in between were prepared using a superlattice deposition process. The bilayer stacks were produced by alternatively depositing layers of silicon-rich-oxide (SiO 0.93 ) and stoichiometric SiO 2 through plasma enhanced chemical vapor deposition (PECVD) on Si substrates. The barrier oxide thicknesses were systematically varied from 2 to 4 nm by repeating the PECVD steps whereas the NC layer thicknesses were kept constant (3.5 nm). Additional thicker blocking oxide layers of 10 nm were also deposited at the top and bottom of the stacks. For current density measurements, samples with thinner (1 nm) inter-layer-oxide barriers and superlattices composed of fewer layers (18 bilayers) were fabricated. These samples were made without the 10 nm blocking oxide layers at either end to allow for carrier injection at lower fields. The superlattices were annealed at 1150 °C for 60 min in inert atmosphere (N 2 ) to phase separate the silicon rich oxide and the NCs. For the passivated sample set, an additional step in H 2 /N 2 atmosphere was added for 60 min at 450 °C. This step was shown to radically reduce the density of the surface defects at the Si/SiO 2 interface 23 . For transparent top contacts, 300 nm thick ZnO layers were added by atomic layer deposition. Aluminum layers of 300 nm were used for the other contacts, directly evaporated on top of the superlattices or on the back side of the highly As-doped wafers with resistivities measuring less than 5 × 10 −5 Ω m. The samples were mounted on a vacuum cryostat and PL spectra were recorded under the excitation of a 325 nm CW He-Cd laser with a power density of 0.65 mW/cm 2 . 
The measurements were carried out both at room temperature (RT) and at 80 K using liquid N 2 coolant. To produce the field across the samples, a forward DC bias (accumulation regime) was applied at the contacts and the voltage was varied. Behavior in the inversion regime was not investigated since a rectifying behavior due to the lack of minority carriers was previously reported in similar structures 24 . It is significant that the field values reported are estimated average field values, calculated by modelling the NC layers as bulk media with average effective permittivity values. PL measurements with DC bias were done under zero current conditions. Current density measurements were done by applying a voltage ramp on a device analyzer and recording the current density, with and without excitation by a broadband light source. Results and discussion The current density plots in Fig. 1 show three different current regimes. There is an initial increase up to 0.1 MV/ cm, where the sample under excitation reaches a higher value. This trend is followed by a slower rate of increase in current density for both cases, with the sample under excitation consistently measuring several orders of www.nature.com/scientificreports/ magnitude higher. Past 2.0 MV/cm the two curves merge and a steady exponential rate of increase in current density is observed for both cases. In the absence of the blocking oxide layers at either end of the superlattice, carrier injection at the contacts produces a space charge limited current regime 25 . With broadband excitation higher carrier densities can be achieved due to excitons being generated and the current density measures higher 26 . Several mechanisms govern the current density in this initial regime. Due to the small size of the NCs strong Coulomb repulsion limits the current flow below a certain threshold voltage 27,28 . Alongside this mechanism is the discretization of the energy levels within the NCs due to quantum confinement. This, coupled with the fact that there is a small but non-zero variation of size within the ensemble of NCs makes the probability of tunneling between resonant states very low 29 . Therefore, electron-phonon coupling also governs the current density in this regime. Past 0.1 MV/cm the limit for space charge limited current is reached. However, further increase in current density is observed due to the contribution of a mechanism, whereupon charge transport across the inter-layeroxide barriers is assisted by surface defects acting as mid-band-gap traps. Unlike conduction band tunneling between two adjacent NCs tunneling from a mid-band gap defect requires an electron to overcome a lower energy barrier and thus the faster charge transport allows current density to be increased. In Fig. 2 the tunnelling of a valence band electron to a charged defect site and the transition of an electron from a defect to the conduction band of an adjacent NC are schematically represented. In the case of the measurement under excitation, a higher current density is observed due to the splitting of excitons generating free charge carriers. These carriers migrate across the superlattice adding to the overall current. However, at higher fields this contribution becomes relatively insignificant since the tunneling rate across the inter-layer-oxide barrier is so high that the overall current density is determined by the injection rate alone. In Fig. 
3 PL spectra of two sets of samples are plotted as a function of the applied voltage comparing the passivated and un-passivated state behavior at RT. A clear decrease of the PL intensity is seen with increasing applied voltage in both cases. The peaks centered around 1.56 eV (795 nm) are characteristic of the quantum confined PL spectra of 3.5 nm NCs 30 . Comparing the PL intensity of the peaks, it is seen that the H 2 passivation step results in NCs with a luminescence intensity several times larger than that of the non-passivated ones. This enhancement of the PL intensity is due to the reduction of the surface defects at the Si/SiO 2 interface 31 . These defects act as electron acceptors. Non-radiative recombination in these deep level traps reduce the internal quantum yield. Through H 2 treatment the NC surfaces at the interfaces are passivated reducing mainly P b defects 32 . Since the density of the interfacial surface defects depends on the ratio of surface atoms to total atoms in a NC the enhancement in luminescence intensity is disproportionately larger for smaller NCs. There might be a shift of the emission peak to higher energies if a small size distribution is still present within the ensemble of NCs. As seen in Fig. 4 there is also a shift towards higher energies as the barrier thickness (i.e., inter layer spacing) is increased pertaining to the weaker coupling of the NC layers 19 . www.nature.com/scientificreports/ Similar structures have been shown to exhibit a red shift of the emission under the effect of externally applied electric fields which was explained as a result of the quantum confined Stark effect 33 . This effect becomes significant at higher field strengths but, we did not observe such a trend in the past on our samples for field magnitudes in the range used for the present study 34 . However, what is observed is a significant reduction in the peak intensities as the voltage increases. In Figs. 5 and 6 the changes in PL intensities in relation to increasing field strength are shown as the evolution of the integrated area under the emission peaks. In both Fig. 5a and b a distinct trend of decreasing integrated PL intensity is seen as the field magnitude is increased. This trend is limited to samples with inter-layer-oxide barrier thicknesses less than 3 nm and is much more pronounced for those with thinner barriers. At field magnitudes surpassing 1.5 MV/cm the curves flatten but the decrease continues, especially for the samples with 2 nm and 2.5 nm barriers in (b). The passivated samples in (a) flatten out the most, where the one with 3 nm barriers shows almost no decrease past 1.75 MV/cm. www.nature.com/scientificreports/ While the trends in (a) and (b) are similar, the PL intensity drops to a minimum of 63% for the non-passivated sample with a barrier oxide layer thickness of 2 nm, whereas this value is only 90% for its passivated counterpart. Another significant difference between the two sets of plots is the late onset of the intensity reduction effect on the passivated samples. While the intensity drop starts almost immediately (0.06 MV/cm) for the non-passivated samples, for those that have undergone the H 2 treatment, no significant reduction is seen until the field strength reaches 0.35 MV/cm. In Fig. 6 similar trends of decreasing PL intensity at 80 K are seen up to 1.5 MV/cm. Samples with thinner inter-layer-oxide barriers and samples without H 2 passivation exhibit the highest decrease in luminescence intensity. 
However, unlike the measurements done at RT, these decreasing emission trends in both Fig. 6a and b come to a halt and reverse direction. This effect is seen most clearly for the passivated sample with 2 nm inter-layer-oxide barriers, where the PL intensity at 2.25 MV/cm matches that of 1.0 MV/cm. A comparison between Figs. 5a and 6a also shows that the onset of the reduction effect is shifted from 0.35 to 0.20 MV/cm as the samples are cooled down to 80 K. Considering that the dependency of the PL intensity on the applied electric field is found to be conditional on the thickness of the inter-layer-oxide barriers being less than 3 nm, one can reason that charge transfer between the NC layers is regulating this behavior. There is an inverse relation between the thickness of the oxide and the tunneling current 35 . Another supporting argument for this case is provided by the fact that this luminescence reduction effect is enhanced when the surface defects are not passivated by H 2 treatment. As the current density measurements show, surface defects play a significant role in charge transfer across the NC layers in SiNC superlattices through the trap-assisted charge transfer mechanism mentioned earlier. The contribution of this mechanism was also reported in studies concerning charge transport in Si/SiO 2 superlattices 14 . While this is not the only significant mechanism, the effect of the surface defects at the Si/SiO 2 interfaces in particular can be verified in a controlled manner, allowing for the confirmation that the luminescence intensity is reduced further if charge transport through the superlattice is enhanced. This is significant since the reduction in the luminescence can be directly linked to exciton splitting. Without the externally applied field, the PL intensity is defined by the absorption cross section and the internal quantum yield of the NCs, which in the ideal case would result in a one-to-one ratio of generated excitons to emitted photons 36 . With the applied field, a fraction of the excitons can be split and migrate across the superlattice, effectively charging NCs. A charged NC can also absorb incident photons, generating hot electrons, but since Auger recombination lifetimes are several orders of magnitude shorter than that of PL, these NCs would not contribute to PL emission 37,38 . This is demonstrated by a first-order approximation of a rate equation model in which the PL and Auger recombination rates are defined in terms of their respective lifetimes ( τ PL and τ A ), the density of photo-generated excitons ( N 1 ), the conduction band carrier density ( D c ) and the exciton generation rate ( G ); a schematic version of this competing-rates picture is sketched below. Note that only the single exciton generation case is considered and the internal quantum yield ( η ) is set to one for simplicity. Using experimentally determined literature values for τ PL and τ A at room temperature and solving Eqs. (1) and (2) for an excitation power density of 0.65 mW/cm 2 with complete absorption yields the PL intensity as a function of the conduction band carrier density D c 19,39 . In Fig. 7a simulated PL spectra for an ensemble of 3.5 nm NCs with a standard deviation of 0.2 nm are plotted using different values for D c . The emission peak centers were derived from experimental values obtained in previous studies on similar NCs linking the bandgap at 0 K to NC size and the temperature-dependent band gap broadening phenomenon 30,40 .
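The competing-rates argument can be made concrete with a minimal steady-state sketch. The exact form of the study's Eqs. (1) and (2) is not reproduced here; the snippet below only assumes a simple model in which photo-generated excitons either recombine radiatively (with the 50 μs room-temperature lifetime quoted earlier) or are quenched through an Auger-like channel whose rate is taken to scale with the conduction band carrier density D_c. The Auger coupling coefficient, the swept D_c values and the complete-absorption estimate of the generation rate G from the 0.65 mW/cm², 325 nm excitation are illustrative assumptions, not values from the paper.

# Hedged, illustrative steady-state sketch of the competing PL / Auger channels.
# tau_PL is the experimental RT value quoted in the text; the coupling
# coefficient C_A and the D_c sweep are assumed placeholders.

H_PLANCK = 6.626e-34       # J s
C_LIGHT = 2.998e8          # m/s
E_PHOTON = H_PLANCK * C_LIGHT / 325e-9   # J per 325 nm photon (~3.8 eV)

POWER_DENSITY = 0.65e-3 * 1e4            # 0.65 mW/cm^2 -> W/m^2
G = POWER_DENSITY / E_PHOTON             # excitons generated per m^2 per s
                                         # (complete absorption, eta = 1)

TAU_PL = 50e-6    # s, radiative lifetime of 3.5 nm SiNCs at RT (from the text)
C_A = 1e-13       # m^3/s, assumed Auger-like coupling to free carriers

def steady_state_pl(d_c_per_m3: float) -> float:
    """Relative PL intensity for a given conduction band carrier density D_c.

    The emitted fraction of generated excitons is the radiative branching
    ratio between the PL channel and the carrier-density-dependent Auger
    channel, so PL drops as D_c (i.e. NC charging) increases.
    """
    radiative_rate = 1.0 / TAU_PL
    auger_rate = C_A * d_c_per_m3
    return radiative_rate / (radiative_rate + auger_rate)

if __name__ == "__main__":
    print(f"G ~ {G:.2e} excitons / m^2 s")
    for d_c in (1e13, 1e17, 1e19, 1e21):   # m^-3, illustrative sweep
        print(f"D_c = {d_c:.0e} m^-3 -> relative PL = {steady_state_pl(d_c):.3f}")

Run as written, the relative PL falls from essentially unity towards zero as the assumed carrier density grows, which is the qualitative behavior the rate-equation argument above relies on.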
This approach shows that an increase in the density of conduction band electrons would result in a reduction in PL. Without charging the Auger recombination rate would be significantly lower than that of PL since it is a three-particle interaction and the density of conduction band electrons in intrinsic Si is several orders of magnitude lower than the values in Fig. 7b (7.4 × 10 9 cm −3 ). In the case of charged NCs the presence of high energy electrons in the conduction band increases the Auger recombination rate and the PL signal is reduced by Auger quenching. Auger recombination would also reduce the conduction band carrier density while exciton splitting charges NCs. The effect seen in the steady state PL measurement is the equilibrium reached between these two mechanisms. In this case the reduction in PL intensity can be ascribed to the fraction charged NCs not contributing to PL at any given time. There are several factors affecting the onset of exciton splitting. In the case of a sample that has undergone surface passivation only a small amount of surface defects take part in charge transport. The current is mainly regulated by the rate of inter-NC polaronic direct tunneling, therefore a sufficient field strength across the oxide has to be reached for exciton splitting. Without the H 2 passivation process, the onset is near immediate and the signal reduction effect itself is almost quadrupled. This is ascribed to the trap assisted charge transfer mechanism allowing for much faster charge transport rates through the oxide than direct tunneling alone would. The early onset of the luminescence reduction effect (i.e., exciton splitting) is due to the enhanced carrier transport achieved through the utilization of the surface defects. However, the carrier transfer enhancement provided by the presence of the defects is temperature dependent. Tunneling of an electron from the conduction band of a NC to a surface defect is accompanied by the absorption of phonons 41 . At 80 K, with the phonons frozen-in, charge transfer rate through this mechanism is severely reduced, explaining the delayed onset of the reduction in PL intensity seen in Fig. 6b. The luminescence reduction, also does not reach the same level when measured at 80 K. Furthermore, the luminescence intensity reaches a minimum around 1.75 MV/cm and recovers. To explain this behavior, the development of the field across the entirety of the sample and within the superlattice needs to be examined in detail, using PL measurements in tandem with a model of the local field magnitudes. In Fig. 8 effective local field and potential values obtained using the experimental parameters with a finite element analysis model are shown. In these the NC layers are considered to have the properties of a bulk medium with an average effective permittivity value equal to the weighted average (by volume) of that of its constituents. As it is in the case of samples without blocking oxide layers at either end of the superlattice, once the field has reached a threshold value an exciton splitting is achieved. The factors effecting this magnitude of this threshold value is explained above. At fields lower than the exciton splitting threshold, the band structure across the sample (1) www.nature.com/scientificreports/ is biased as depicted in Fig. 8a. 
The voltage drops uniformly across the entirety of the sample, with the highest field magnitudes being reached in the oxide at the Si/SiO 2 interfaces. There is no charge accumulation in this regime, no charged NCs and no change in PL intensity. Past the threshold field value, excitons are split and carriers migrate across the superlattice. These carriers charge NCs, leading to a reduction in PL intensity. Due to the presence of the blocking oxide barriers, charge accumulates at either end of the superlattice. This charge accumulation generates an induced field counteracting the externally applied field. Generation of an induced electric field leads to a reduction of the magnitude of the effective average field "felt" by the NCs in the superlattice. However, this reduction is not unlimited, since it reduces the effect of its own driving force. The value can drop until the average field in the superlattice, excluding the blocking oxide layers, reaches the exciton splitting threshold value, at which point a steady state balance is reached. Since the external field is a controlled experimental parameter, the reduction is compensated for by the increase of the field across the blocking oxide barriers (cf. Fig. 8d-II). The voltage drop is not uniform and the highest field values are inside the oxide immediately at the interface between the charged NCs and the blocking oxide barriers. In the steady state PL measurements, this correlates to the PL decrease regime that can be seen up to 1.5 MV/cm in Fig. 5a. At higher fields, this uneven distribution of field strength results in local effective field values in the blocking oxide barriers getting much higher than the average field across the entire sample. In Fig. 8d-III it is seen that the field in the blocking oxide reaches 3.24 MV/cm while the average field is only 2.0 MV/cm. At these high field values the accumulated charge flows into the contacts through field emission of electrons across the 10 nm blocking oxide. In the steady state, this results in a smaller number of charged NCs, leading to a flattening of the reduction in Fig. 5a and the recovery of the PL intensity in Fig. 6a. Exciton splitting energies can be calculated using the threshold field magnitudes extracted from Figs. 5 and 6 by extrapolating the decreasing trend with a linear curve fit and taking the intersection at unity as the threshold. The threshold field value (F s ) in this case is the field value necessary for exciton splitting and does not get reduced by the superimposed internal field. The exciton splitting energy then follows as E s = q e F s d stack , where d stack is the total thickness of all the NC layers and barriers in between and q e is the elementary charge. Here, unlike the exciton binding energy, which is an intrinsic property of the bulk material, the exciton splitting energy is defined as the energy required to split an exciton and transfer the charge across the superlattice, which includes the exciton binding energy but also the work done to move the charges across the superlattice. These values in Fig. 9 show that under all experimental conditions there exists a correlation between barrier thickness and increasing exciton splitting energy. This phenomenon is explained considering the polarization of the dielectric as an electron is transferred between two NCs. With shorter inter-layer distances (i.e., barrier thickness) a thinner section of dielectric needs to be polarized by a migrating electron.
This reduction of the work needed results in lower exciton splitting energies for samples with thinner barrier oxides. On the other end of this scale, for samples with barrier oxides thicker than 3 nm, the exciton splitting energies are so high that no PL reduction effect is observed. It is also seen that the exciton splitting energies are invariably lower at 80 K for all samples. This can be explained by the dependence of the relative permittivity of the medium on temperature. The exciton radius scales linearly with the permittivity of the medium; therefore, the higher permittivity at lower temperatures reduces the exciton splitting energy 42 . Ultimately, through the analyses of the PL measurements, it was shown that the application of an electric field across a SiNC superlattice under excitation causes exciton splitting and the charging of NCs with these free charge carriers. Charged NCs accumulate at either end of the superlattice and do not contribute to PL. In the steady state measurements, this is measured as the reduction of luminescence intensity. At higher field magnitudes, field emission transfers the accumulated charge to the contacts. With the introduction of contacts between the blocking oxide layers and the superlattice itself, this platform can potentially be used as a photo-detector where the absorption band edge can be tailored by exploiting the quantum confinement effect. The exciton splitting and accompanying charge accumulation require a tunneling current across the barrier oxide, which was found to be enhanced by a trap-assisted transfer mechanism. This enhancement allows for carrier splitting at lower fields in NCs with higher surface defect densities. The exciton splitting threshold in NCs is also influenced by temperature, where the trap-assisted charge transfer mechanism is hindered at low temperatures due to the freezing-in of phonons. The lowest exciton splitting thresholds can be achieved at room temperature without surface passivation. However, it should be noted that for photovoltaic applications surface passivation is still necessary: the current density enhancement provided by the increased surface defect density does not come close to the enhancement in quantum yield that the passivation process achieves by lowering the surface defect density 43 . Considering the case of a tandem solar cell in series, the potential across the superlattice would have to be limited to below 1.12 V 44,45 . To achieve the necessary 0.35 MV/cm field strength, the thickness of the NC top cell would have to be below 32 nm, allowing for a maximum of five bilayers (a back-of-the-envelope check of these figures is sketched below). With these conditions met, a higher yield could be achieved without limiting the short-circuit current of the tandem device. These values apply only to the specific case of the structures in the present study. Using the same experimental method, these values can be determined for any Si/SiO 2 superlattice with different barrier thicknesses and NC size.

Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
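As a sanity check on the design figures above, the following sketch works through the arithmetic: the splitting-energy relation E_s = q_e·F_s·d_stack reconstructed earlier, applied to an assumed stack of twenty-five bilayers of 3.5 nm NC plus 2 nm barrier oxide (blocking oxides excluded), and the tandem-cell constraint that at most 1.12 V may drop across the top cell while the 0.35 MV/cm threshold field is still reached. The stack geometry and the form of the energy relation are assumptions for illustration; the 0.35 MV/cm, 1.12 V, 32 nm and five-bilayer figures are the ones quoted in the text.

# Back-of-the-envelope check of the design figures quoted above.
# Stack geometry and the splitting-energy relation are assumptions used
# for illustration, not values taken from the paper's figures.

Q_E = 1.602e-19            # C, elementary charge

# Assumed superlattice: 25 bilayers of 3.5 nm NC + 2 nm barrier oxide,
# with the blocking oxides at either end excluded from d_stack.
NC_THICKNESS = 3.5e-9      # m
BARRIER_THICKNESS = 2.0e-9 # m
N_BILAYERS = 25
d_stack = N_BILAYERS * (NC_THICKNESS + BARRIER_THICKNESS)

F_S = 0.35e6 * 100         # threshold field, 0.35 MV/cm -> V/m

# Reconstructed relation: energy to split an exciton and move the charge
# across the stack = charge * threshold field * stack thickness.
e_split_joule = Q_E * F_S * d_stack
print(f"d_stack = {d_stack*1e9:.1f} nm, illustrative E_split = "
      f"{e_split_joule / Q_E:.2f} eV")

# Tandem-cell constraint: at most 1.12 V across the top cell while the
# 0.35 MV/cm threshold field is still reached.
V_MAX = 1.12               # V
max_thickness = V_MAX / F_S
bilayer_pitch = NC_THICKNESS + BARRIER_THICKNESS
print(f"max top-cell thickness = {max_thickness*1e9:.0f} nm "
      f"(~{int(max_thickness // bilayer_pitch)} bilayers)")

With these assumed dimensions the script reproduces the quoted 32 nm maximum top-cell thickness and a maximum of five bilayers; the exciton splitting energy it prints is only an illustration of how the reconstructed relation would be evaluated, not a value reported in the study.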
6,175.4
2022-02-16T00:00:00.000
[ "Physics" ]
Learning Mathematics Through Mathematical Modelling Processes Within Sports Day Activity In this study, we adapted the school sports day to provide opportunities to relate real-life situations with mathematical knowledge and skills. The purpose of this study was to describe the way that the teachers interact with their students and the students’ responses during mathematical modelling processes. The designing of the modelling task was inspired by the Realistic Fermi Problems about the bleacher in the school sports day. The modelling task was designed by a collaboration of mathematics teachers and educators and experimented with 10th-grade students. Each experiment lasted for 45 minutes and was conducted in the one-day camp with 45 students. The results showed that the students who had no previous experience of mathematical modelling engaged in mathematical modelling processes with their friends under the guidance and supporting of the teacher. Most of them were able to think, make assumptions, collect data, observe, make conjectures and create mathematical models to understand and solve the modelling task.    Introduction In several countries, the promotion of science, technology, engineering, and mathematics (STEM) education is an essential educational topic that enables students for a scientific and technological society. One of the important teaching and learning approaches for the transition to STEM education and interdisciplinary mathematics education is mathematical modelling (Borromeo Ferri & Mousoulides, 2017;Tezer, 2019). Besides, mathematical modelling can be considered good examples of STEM integration (Kertil & Gurel, 2016). Actually, mathematical modelling supports mathematical learning and enables students to deal with real-world problems (Blum & Borromeo Ferri, 2009). Although the ability to apply mathematical knowledge and solve real-world problems has been recognised in the Basic Education Core Curriculum in Thailand, both students and teachers have little experience in mathematical modelling. Moreover, about 50 per cent of 15-year-old Thai students did not achieve the international basic proficiency level (level 2) at mathematical literacy in PISA 2009 and PISA 2012. These results show that Thai students lack the ability to connect realworld problems with mathematics (Klainin, 2015). Hence, we were interested in observing and describing how the students respond to modelling activities. To achieve that purpose, we started by designing tasks to encourage students to connect inside and outside classroom mathematics and foster their ability to solve real-world problems with mathematical modelling processes. We adapted the idea of using realistic fermi problems about the bleacher in the school sports day, closely related to Thai students' experiences, to introduce mathematical modelling in upper secondary mathematics following Ärlebäck and Bergsten (2013). The questions we aim to answer in this paper are as follows: 1. How does the teacher interact with students during the mathematical modelling task? 2. How do the students respond to mathematical modelling task? This study aims at describing the students' responses to the modelling task and the teacher's interaction with their students during mathematical modelling processes. Mathematical modelling processes Mathematical modelling is an essential educational topic that fosters the students' ability to deal with real-world problems (Blum & Ferri 2009). 
Mathematical modelling processes showed how the process connects real-life contexts and mathematics content. It might look different and highlight different perspectives depending on the research's purpose and focus (Blum, Galbraith, & Niss, 2007). In this study, we adapted an ideal modelling process that includes six phases allowing cognitive activities to solve the modelling tasks described by Ferri (2006) Based on Ferri's model, modelling processes consisted of two components: phases and transitions that intertwined between two domains, reality and mathematics. The six phases comprise a real situation, mental representation of the situation, real model, mathematical model, mathematical result and real result. Simultaneously, the transitions include six activities: understanding the task, simplifying/structuring the task, mathematising, working mathematically, interpreting and validating. The realistic Fermi problems There are several important principles in model-eliciting activities (Lesh, Hoover, Hole, Kelly, & Post, 2000). Two of them are: 1) model construction principle, as in the problem must evoke the need to mathematise and model meaningful situation to solve a problem, and 2) reality principle, as in the problem need to be relevant and meaningful to the students; the kind that they might encounter in real life. Effective modelling activities should also be open, in a sense that there are no predetermined right answers. The students have the freedom to choose the most suitable mathematical concepts they will use to solve the problem. One of the notions that fit these principles is the Fermi problem. The term Fermi problem originates from the 1938 Italian Nobel Prize winner in physics Enrico Fermi . He had posed and solved problems like how many piano tuners are there in the US. He demonstrated that using a few reasonable assumptions and estimates could give accurate and reasonable answers (Efthimiou & Llewellyn, 2007). The Fermi problem was always answered by simplifying, making assumptions, estimating, and doing rounded calculations while the exact answer is often not available. It is the main important feature of the Fermi problem-solving process (Sowder, 1992). According to Ärlebäck and Bergsten (2013), the characters of Realistic Fermi Problems was described as follow: 1. the realistic Fermi problem does not necessarily demand any specific pre-mathematical knowledge. All individual students or groups of students can access and solve the problem; 2. the realistic Fermi problem is more than just an intellectual exercise. The context is realistic and presents clear real-world connection; 3. the realistic Fermi problem is open. The specifying and structuring of the relevant information and relationships are needed to tackle the problem; 4. the realistic Fermi problem does not show the numerical data. The problem solver needs to make reasonable estimates of relevant quantities, and 5. the realistic Fermi problem is to promote group discussion. Design of the Study The question was posited as the students' responses to the modelling task and teachers' interaction with their students during each mathematical modelling processes. Participants The study participants were divided into two groups. The first comprised three mathematics teachers and two mathematics educators interested in mathematical modelling, while the second consisted of 45 tenth grade students. 
These students had no previous experience in mathematical modelling in the classroom based on the Basic Education Core Curriculum of Thailand. Methods In designing the bleacher task, the task's objectives were to encourage students to connect inside and outside classroom mathematics and enhance their ability to solve real-world problems with mathematical modelling. We designed and validated the task by collaborating with three mathematics teachers and two mathematics educators interested in mathematical modelling. The task validation is based on four notions below. 1. The characters of Realistic Fermi Problems described by Ärlebäck and Bergsten (2013). 2. The information on the students' experiences and prior knowledge relevant to the task. This information was collected by conducting interviews with mathematics teachers who had experience teaching these students. The interview was important to confirm that the bleacher in the school sports day problem is both encouraging and engaging context for the students. 3. The framework for designing mathematical modelling learning experience described by Ang (2018). 4. The modelling cycle under a cognitive perspective described by Ferri (2006). The bleacher task is as follows: The bleacher task: In the school sports day, all students were organised into several groups with wide-ranging sports and varied performance abilities. Each group was labelled by colour to participate in sports, cheerleading and bleacher cheer show performance. You and your team members are given the relevant materials. Draw your own conclusions and answer these following questions: 1. How many students that can reasonably be arranged in the bleacher for cheer show performance? 2. What is the relationship between the sizes of bleacher and the numbers of students? In implementing the bleacher task, a qualitative approach was adopted. The teacher's role (one of the authors) was as a facilitator, encouraging and guiding a small group of students to deal with the bleacher task. The 45 students were divided into eight groups of 4 -5 students with wide-ranging abilities in mathematics. Each experiment lasted for 45 minutes and was conducted with two groups of students in the one-day camp. The students' responses and teachers' interactions during mathematical modelling process were gathered through observations, written work, and interviews. The data were analysed by content analysis according to the mathematical modelling framework by Ferri (2006). The result is presented in a narrative description. Results and Discussion This study showed the students' responses to the bleacher task and the way that teacher interact with their students during each mathematical modelling process as follows. Initially, the teacher and students met at bleacher in the school playground ( Figure 2). The teacher asked the student about their experience with the bleacher on the school sports day. All of the students participated in the conversation about the bleacher. The students were given the bleacher task. The teacher then gave the students time to understand the task and discuss what the problem wanted to know with their friends (Figure 3). Next, the teacher discussed the factors related to determining the number of students sitting on the bleacher to guide students to simplify the task and make assumptions. Most of the students' assumptions were as follows: 1. each student must be seated according to the size of the bleacher, fit, and not overcrowded; 2. 
the sitting position must look neat and straight; and 3. the sitting posture must be the same. Besides, one group assumed the distance between each student must be appropriate for movement in performance. Then, the teacher asked the students what the relevant and necessary data is, which we have to search for solving the problem. Some groups of students identified some unnecessary data (width and height of the bleacher). In these cases, the teacher asked them how the bleacher's width and height were relevant and necessary for solving the problem. They discussed and found that the bleacher's width and height were not necessary (Figure 4). On the other hand, most students were available to identify the relevant and necessary data (the size of the student who sat on a bleacher or area for sitting per person) as shown in Figure 5. During their search for the data, two groups have interesting reasons for determining the size of the student which connects between real-world and mathematics such as knowing the height and weight of the student to choose the right seated student, fit and balance to the bleacher, and sitting in a straight line. After that, the teacher guided the students to create a real model which simplified and structured students' mental picture ( Figure 6). They used drawings and diagrams to represent the bleacher's size, sitting position, and distance between each student. Some student groups identified conditions about sitting posture and performance on the bleacher such as space for placing cheer prop and students' size and height. The teacher then allowed the students time to discuss and think about the appropriate mathematical knowledge and concepts related to solving the problem. Moreover, they tried to sit ( Figure 7) and measure by using the measuring tape ( Figure 8) to collect data, observe, make conjectures, and create mathematical models to solve the problem under the teacher's guidance and support. The students used the bleacher sizing and the sitting experiments and obtained measurement from the different width of the body (the lap and shoulder). Some groups used either the actual measurements (2 decimal places) or approximate values from their measurements which round it to integers to make calculations easier (Figure 9). Finally, some groups used only one value from one measurement, while others choose to use the mean of values from their repeated measurements. Next step, the students worked mathematically based on unit rate ideas to represent the relation of seat length per person, then created the relation between seat length and the number of the students in general term. The teacher asked the students to interpret and check the mathematical results. Furthermore, students validated the result by comparing the solution with their experience. Finally, they have to share their problem-solving processes and explain the meaning of all variables in the equation based on the real-world situation to others, as shown in Figure 10. In this step, we found that some groups have mistaken in interpreting. For example, the students used the equation (y = 25x) to model the general relation between seat length (y) and the number of students (x). In this case, the teacher asked students to think about applying the student's model in a normally real-world context because actually we knew the size of the bleacher first then found the number of students. 
So, the teacher tried to lead students to concern about the independent variable (input) and the dependent variable (output) based on the real-world situation. It was evident from the result that the use of open and complex problems for a diverse solution, like Fermi problems, can be an effective modelling task. The problem can elicit the cognitive processes essential to mathematical modelling (Ferri, 2006) from the students. This is in line with the studies by Peter-Koop (2004) and Ärlebäck (2009), which demonstrate Fermi problems' potential to engage the students and encourage them through multiple modelling cognitive stages. Conclusion The description of the students' response and the teacher's interaction shows that typical Fermi problem, which is open, ill-structured, and possesses high-complexity, can effectively engage the students throughout the modelling task and elicit an appropriate cognitive response. The real-world context that is closed and related to students' experience also enhances students' engagement in the mathematical modelling processes with their friends under the teacher's guidance and support. It is recommended for the teacher to plan to deal with students in the modelling activity to know about the phases and their transitions in each modelling cycle because it is essential to guide and support them in dealing with a real-world problem.
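To make the unit-rate idea behind the students' models concrete, the following is a minimal sketch of the kind of relation they built. It is a hypothetical illustration: the 25 cm of seat length per student mirrors the y = 25x example discussed above, but the variable names, the centimetre units and the rounding are assumptions, not data from the study.

# Minimal sketch of the unit-rate bleacher model discussed above.
# Assumes roughly 25 cm of seat length per seated student, as in the
# students' y = 25x example; all numbers are illustrative only.

SEAT_LENGTH_PER_STUDENT_CM = 25   # assumed width occupied per student

def students_per_row(row_length_cm: float) -> int:
    """Students that fit on one row, taking seat length as the input
    (the real-world direction: bleacher size is known first, the number
    of students is the output)."""
    return int(row_length_cm // SEAT_LENGTH_PER_STUDENT_CM)

def students_on_bleacher(row_length_cm: float, n_rows: int) -> int:
    """Total students for a bleacher with n_rows rows of equal length."""
    return n_rows * students_per_row(row_length_cm)

if __name__ == "__main__":
    # Example: a 6 m (600 cm) row repeated over 5 rows.
    print(students_per_row(600))          # -> 24 students per row
    print(students_on_bleacher(600, 5))   # -> 120 students in total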
3,213
2020-12-31T00:00:00.000
[ "Mathematics", "Education" ]
A Sitz for the Gospel of Mark? A critical reaction to Bauckham's theory on the universality of the Gospels 1

The aim of this paper is to evaluate the article by Richard Bauckham, in which he challenges the current consensus in New Testament scholarship that the gospels were written for and addressed to specific believing communities. The thesis that Bauckham puts forward is that the gospels were written with the intention of being circulated as widely as possible; they were written for every Christian community of the late first century where the gospels might circulate. First, a Wirkungsgeschichte of Mark's gospel in terms of the possible localities of origin and the possible theological intentions for writing the Gospel, that is, of the results of the current consensus in New Testament scholarship, is given. Bauckham's theory is then put on the table and evaluated.

1. INTRODUCTORY REMARKS

The aim of this paper is to evaluate the most recent article of Richard Bauckham in which he challenges the current consensus in New Testament scholarship that the gospels were written for, and addressed to, specific believing communities. Bauckham's thesis in short is that the gospels were written with the intention to be circulated as widely as possible; they were written for every Christian community of the late first century to which the gospels might circulate.

1 Dr Ernest van Eck (MA, DD) participated as research associate in the project "Biblical Theology and Hermeneutics", directed by Prof Dr A G van Aarde. This article is a revised edition of a paper read at the NTSSA, on 29 March 2000 at the Rand Afrikaans University.

Is there a kernel of truth in the Papias tradition? The following arguments (internal and external) can be put forward to answer this question positively (see inter alia Vander Broek 1983:10-12; Matera 1987:4-7; Van Eck 1990:2-4; Duling & Perrin 1994:297-298):
• If Mark was chosen by second century Christians only to give the Gospel authority, why did they choose a follower of Paul rather than a disciple of Jesus (like Matthew)? 3

3 Aside from the evidence in the tradition that the Gospel was written after the death of Peter, the Gospel's emphasis on suffering and endurance is usually seen as an indication of the date for Mark. In Rome persecution of the Christians took place under Nero in 64 CE, and thus a date between 64 and 70 is commonly accepted. Also, if Rome is accepted as place of origin, the "desolating sacrilege" in Mark 13:14 might have symbolized Nero.
There are, however, also certain problems with the Papias tradition:
• Although the Gospel gives no information concerning the author, date and provenance, the Patristic witnesses purport to know all three. How trustworthy are, therefore, the Papias tradition, and, for that matter, the Anti-Marcionite Prologue and the writings of Irenaeus, Clement of Alexandria and Origen (see Vander Broek 1983:12)? 4
• the description of "Mark" in Papias sounds defensive, and the tradition shows an apologetic tendency (Peter wrote "accurately", "erred in nothing", "not to omit or falsify"; see Duling & Perrin 1994:298);
• Mark, who was not a disciple of Jesus, is connected with a disciple, Peter (Duling & Perrin 1994:298). Moreover, it seems that the connection between Mark and Peter is based primarily on 1 Peter 5:13 (Vander Broek 1983:15);
• the Latinisms, the explanation of Jewish customs, the Roman style of reckoning with time, as well as the imprecision in regard to Palestinian geography could be explained as being part of the traditions used by the evangelist, or as part of the oral tradition;
• suffering and persecution did occur in several places in the Christian church in the 60's and early 70's, not only in Rome. Taking into account the reaction from Yavneh in the early 70's 5 , and the emergence of formative Judaism as a result of the reorganization of Judaism taken up by Yavneh, the persecution of Christians could well have been coming from Jews;
• the Greek of the Gospel is unsophisticated, and, though it contains Latinisms, also contains Semitic (Hebrew or Aramaic) language influences. The document is also very accurate with regard to Jewish matters such as housing and taxation, as well as peasant, village and rural agricultural life in Palestine; and
• the Papias tradition does not take into account the results of historical criticism, which indicate that the gospels developed in a gradual (evolutionistic) way, and that the evangelists made use of specific sources. The findings of, for example, the Formgeschichte are irreconcilable with the Papias account in regard to the "remembrances of Peter".

4 A comparison of all the Patristic witnesses indicates that Papias is the basic source from which all the others drew. Yet, not all the patristic sources indicate that the Gospel was written in Rome, and a tendency to associate Peter closer and closer with the writing of the Gospel can be detected as the tradition developed (see Vander Broek 1983:12-14; Matera 1987:4-5).

5 Recent studies of the Judaic-Tannaitic writings, which had their origin in the post-70 CE reformed, official Judaism at Yavneh under the leadership of Johanan ben Zakkai and Gamaliel II, suggest that the belief in the divine birth of Jesus, as well as in his resurrection, were the fundamental reasons why the Yavneh scribes regarded Jewish Christians as a heretical grouping inside Judaism. This led to the circulation of anti-Christian pronouncements, the issuing of a prohibition against the reading of heretical books (Sifre Minim) and the promulgation of the Birkat ha-Minim (which implied excommunication from the synagogue; see e g John 9:22; 12:42; Katz 1984:45-47; Overman 1990:38-43).
From the above it is clear that a Roman setting for Mark, as proposed by and derived from the Papias account, depends on both external (the Patristic witness) and internal evidence.Of these two, the Patristic witness is the more basic, and, without this foundation, the internal evidence is not convincing.If the "eyewitness" nature of Mark is disapproved, the "Gospel is not only disassociated from Peter, but also from the Roman setting in which Peter is often bound" (Vander Broek 1983: 17). Hengel (1985), Best (1983), Brandon (1967) and Standaert (1983), however, do support a Rome origin for Mark (by either using or not using the external patristic evidence as described above).These scholars argue that Mark was addressing Gentile Christians in Rome somewhere shortly before or just after the fall of the temple in Jerusalem.Mark 13, Jesus' "small apocalypse" in Mark is taken as cue, since it reveals an atmosphere in which Christian apocalypticism plays a major role and in which persecution has begun, or is about to begin (see Matera 1987:7). Hengel (1985:47-52) argues that the Papias tradition must be seen as authori- Digitised by the University of Pretoria, Library Services The years immediately prior to 69 CE can be described as apocalyptic in character: in 64 CE Nero persecuted the Christians in Rome, Peter and Paul were martyred just before 69 CE, towards the end of the reign of Nero there was reports of famine and unrest, the Jewish war started in 66 CE, after Nero's suicide (in 68 CE) there was a civil war in which three emperors lost their lives (Galba, Qtho and Vitellius), and several earthquakes were experienced in Italy round about 68 CEo All these events, Hengel argues, would have led the Christians in Rome to see their times as the end time.Although the temple in Jerusalem has not yet been destroyed, the author can see that the event is at hand. Hengel thus argues that Mark 13 does not presuppose the catastrophe of 70 CEo Best (1983:142-145) also argues that Mark wrote before the fall of the temple in 70 CEo The reason why Mark wrote was that Mark's community "was in danger of slipping back into 'the easy and self-indulgent life which seemed to be the goal of the Greco-Roman world'" (Best 1983:144).The temptation mentioned in Mark 4:10-20 was real for the members of Mark's community, and the community also feared persecution. As a way of avoiding persecution, the community was concerned with apocalyptic hope. In these circumstances, Mark calls his community to take up the cross.Mark thus acts as a pastor to a community that has lost its original fervor in following Jesus.Brandon (1967:240-266), on the other hand, argues that Mark was written in the aftermath of the Jewish war as an apologia for Roman Christians.By using external and internal evidence, Brandon argues for a date shortly after 71 CE for the writing of the Gospel.According to Brandon Roman Christians would have seen the great procession of Vespasianus and Titus in Rome celebrating Rome's victory over the Jews.In the procession, as described by Josephus (Wars of the Jews, VII, 116-157), the Romans displayed "those ancient purple habits" (the purple hangings of the sanctuary in the temple in Jerusalem, the temple curtain mentioned in Mk 15:38). 
This visual display of triumph would have affected the Christians in Rome at least in two ways.First, it reminded them that their own faith stemmed from the Jewish people that revolted against Rome, and, second, they realized that the Romans might regard them as also being infected with revolutionary ideas . 10:38) if they think they are able to be baptized with the baptism by which he is to be baptized, according to Standaert, also points to the baptismal imagery of the Gospel,6 2.2 Syria as the setting or Mark's gospel A location which has occasionally been suggested for the origin of Mark, mainly because scholars started to question the Roman proposal, is that of Syria (see Vander Broek 1987:30).Bartlet (1922:34-40) Modern scholars who propose a Syrian setting for Mark do so on a somewhat more secure foundation, in that there is a recognition of the Hellenistic and Palestinian features in the Gospel (Vander Broek 1987:31).Fuller (1966:107), for example, is of the opinion that the language of Mark, as well as the miracle stories and Mark 13, clearly show that the Gospel has a Hellenistic background, a background that suggests an origin in Antioch. Karnetski, in following Marxsen's (1959) Redaktionsgeschichtliche analysis of Mark, sees the "final" Mark as a Galilean redaction of a document that originated in Syria (in spite of the fact that he sees Galilee as the place where the community addressed in Mark is to begin their mission to the Gentiles).KUmmel (1975:98) also opts for a Digitised by the University of Pretoria, Library Services A SiI1./or the Gospel 0/ Mark? "Gentile community in the East".He sees Mark as defending Jesus against the accusation of abandoning the Jewish law and agamst the suspicion of Jewish nationalism.In the Gospel Mark ascribes all human guilt in Jesus' crucifixion to the Jewish leaders (e g Mk 2: 6-8; 3:6; 7:7, 13; 12:13,28; 14:1,55).This apologetic of Mark is intended to make his Gentile readers aware of the riddle of Jewish unbelief and their own grace, an apologetic intent that could only have been understood by a Gentile audience such as in Syria. Kee (1984:245-255) Lohmeyer (1936) argues that early Christianity had two main centers: Galilee and Jerusalem.In Galilee a Son of Man eschatology predominated, and in Jerusalem a nationalistic messianic hope prevailed.Galilee celebrated the breaking of the bread, and 8 Other scholars that also argue for a Syrian provenance of the Gospel whose points of view are not discussed in the above section are Schweizer (1967), Vorster (1980), Waetjen (1989).and Duling & Perrin (1994). 982 HTS 56(4) 2000 Digitised by the University of Pretoria, Library Services Jerusalem the memorial meal.In Galilee Jesus was the Lord, and in Jerusalem he was the expected Messiah.According to Lohmeyer, Mark's gospel has taken up this historical (geographical) difference(s) between Galilee and Jerusalem in the sense that "geography becomes theology" (Lohmeyer 1936: 162).In Mark, Lohmeyer argues, a direct opposition between Galilee and Jerusalem can be detected: Galilee is the center of Jesus' ministry, the sphere of divine activity, while Jerusalem is typified as the center of opposition towards Jesus' ministry, the sphere of hate and misunderstanding.Understood as such, in Mark Jerusalem (the traditional "Gottesstadt") is replaced by Galilee (the new "kommende Gotteshaus")9.Lightfoot (1938: 1-48, 132-159) by God as the seat of the gospel and the revelation of the Son of Man, while ... Jerusalem ... 
has become the center of relentless hostility and sin.Galilee is the sphere of revelation, Jerusalem the sphere of rejection" (Lightfoot 1938: 124-125)10. Lohmeyer and Lightfoot's study of the opposition between Galilee and Jerusalem in Mark was further developed by Marxsen (1959) in his redaktionsgeschichtliche study 9 In a later work, titled Kultus und Evange/ium, Lohmeyer (1942) described the opposition between Galilee and Jerusalem in tenns of the concepts Evangelium (Galilee) and Kultus (Jerusalem).Jesus' activity in Galilee (e g the forgiving of sins, eating with sinners, disobeying the rules of the Sabbath) was critique aimed at the cult in Jerusalem.Through these activities Jesus postulated a "neue Heiligkeit und neues Heil" (Lohmeyer 1942:106) and dismantled the cult (temple) in Jerusalem. 10 See Malbon (1982:242-255) for a more extensive and apt sununary of the positions Lohmeyer and Lightfoot in regard to the possible opposition between Galilee and Jerusalem in Mark. According to Marxsen.this theological content of the Gospel can be understood against the following historical background: the evangelist is writing some time during the Jewish War (66-70 CE).The threat of violence and war and the destruction of Jerusalem is imminent.and Christians are suspect and persecuted by both the Romans and Zealots.Also, Jewish pretenders tempt the Christians to forsake Jesus.Mark writes his Gospel to admonish those Christians still in Judea (or Jerusalem) to flee to Galilee, the place of Jesus' activity and coming parousia, to justify the existence of the community already in Galilee, and to motivate the community in Galilee to take up their cross and follow Jesus, amidst their situation of suspicion and persecution.II Kelber (1974), by concentrating on an analysis of the "kingdom passages" in Mark (cf inter alia Mk 1:15; 3:31-35; 4:10-34; 8:34-9:1), not only agrees with Lohmeyer. Lightfoot and Marxsen with reference to the opposition between Galilee and Jerusalem in Mark, but also further darkens the negative view of Jerusalem found by these scholars (Malbon 1982:245)12.Kelber's support for a Galilean provenance of the Gospel can be summarized as follows: the Gospel was written as a polemical work of north (Galilee) aimed at the ruined tradition of the south (Jerusalem), formed by Peter and the twelve. The religious leaders in Jerusalem, after Jesus' resurrection, betrayed Jesus" original vision.Self-styled Christian prophets of Jerusalem fell into an eschatological heresy that II For a more extensive summary of Marxsen's view on the Maritan SilZ, see Vander Broek (1987:33-47). the parousia will occur in Jerusalem, and the family and the failed disciples of the Markan Jesus joined the Jerusalem authorities in opposing him. 
Mark thus writes from the perspective of Galilean Christianity against the Jerusalem Christianity that was current in his day.For Mark the place of the parousia and the kingdom is not in Jerusalem, but in Galilee.Moreover, the time of the parousia is not in The most extensive study on a possible Silz for Mark was done by Vander Broek (1983) in his doctoral dissertation.Vander Broek (1983:302), starting from Marxsen's main arguments, describes the Markan Silz as follows: Mark was produced by a member of a Christian community in Galilee shortly after the Jewish war.Mark's community was "apocalyptically orientated, a stance which it has been forced to define in relationship to the Jewish War (ch.13), and which influences its view of mission (13: 10), miracles Gospels were not addressed to or intended to be understood solely by any specific community such as Rome, Syria or Galilee.Or, to put stronger in his own words: "the enter-IS In Markan scholarship the discussion on a possible date for Mark nonnally centers on Mark 13:2. the prophecy of the destruction of the temple.It is argued that. in postulating a date for Mark, this event must be understood as about to happen or that it just happened.My choice for a post 70 date for the Gospel simply builds on my understanding of Mark 13 as a narrated speech of Jesus (see Vorster 1987:203-222). In no way can I imagine that the historical Jesus "prophesized" (in terms of telling the future) the fall of the temple.Mark.therefore.employs the words of Jesus in Mark 13:2 as "prophecy".that is. a prophecy ex eventu. 16 For a discussion on the house church as the dominant social institution in early Christianity.see Van Eck (1991:667-671). 17 Other scholars who argue for a Galilean setting for Mark.whose arguments are not discussed in full in the above section.are Conzelmann (1967), O'Callaghan (1972).Kealy (1977) and Mack (1995). Digitised by the University of Pretoria, Library Services A Silr.for the Gospel of Mark? Even if one argues that the community in which a gospel was written is likely to have influenced the writing of a gospel, even though it is not addressed by the gospel, it does not follow that we have any chance of reconstructing that community.For Bauckham, therefore, trying to construct a specific community for any of the Gospels have no hermeneutical relevance.Scholars should, therefore, ase using the terms Matthean or Markan community, since they no longer have any useful meaning. 
How does Bauckham come to the above conclusion?His argument runs as follows: Nearly all scholars writing about the Gospels treat it as self-evident that each evangelist addressed the specific context and concerns of his own community, and a large and increasingly sophisticated edifice of scholarly reconstruction has been erected on this basic assumption.According to Bauckham (1998: 10), however, the Gospels were written for a general Christian audience, not for a specific Christian audience.Mark, therefore, was not written for a specific audience called the Markan community (or Mark's church), but for any and every Christian audience in the late first century to which the Gospel might circulate.Or, in the words of Bauckham (1998:11) Digitised by the University of Pretoria, Library Services Em~st vall Eek respective gospels to replace that of Mark.To suppose that Matthew and Luke, after Mark's gospel received such a wide audience, addressed their own gospels to a much more restricted audience such as their own communities, therefore, seems to Bauckham prima facie improbable. According to Bauckham (1998:13), the current dominating view that each evangelist wrote for his own community should be seen as a result of British scholarship starting at the end of the nineteenth century.[1967], Weeden [1968], Reploh [1969], Evans [1970], Kelber [1974] and Pesch [1977]) an approach started to develop that aimed to reconstruct the distinctive features of the Markan community and to explain the Gospel as addressing specific issues within the community.Nowhere, however, Bauckham argues, arguments are put forward to substantiate this kind of working hypothesis, rather, it is treated as a self-evident fact that each Gospel addresses the specific circumstances of a particular community. According to Bauckham (1998:19-22) all these attempts (since the late 1960s ) that take seriously the claim that each Gospel addresses the specific situation of a particular community have three main characteristics in common.One is the develop- Digitised by the University of Pretoria, Library Services A Sitt/or the Gospel 0/ Marie?ment of allegorical readings of the gospels in the service of reconstructing not only the character but also the history of the community behind the gospels.In other words, characters and events referred to in the gospels are taken to represent groups and events (experiences) within the community.This method, according to Bauckham (1998:20), leads to "historical fantasy".The second characteristic of scholars that claim that each gospel addresses the specific situation of a particular community is the increasing use of sophisticated social-scientifJ5= methods for reconstructing the respective communities behind each gospel.These methods, however, have taken over the same false assumption that in each gospel there indeed exists a relationship between a single context in which it was written and for which it was written. 18The third aspect of the reading strategy adopted by the current consensus is that the so-called implied relationship between text and context leads them to understand features of the text that need not to be understood so at all.A study of the social status of the characters in Mark, for example, does not automatically mean that the social status of the implied audience are the same. 
In the latter part of his article, Bauckham (1998:26-44) puts forward his arguments to substantiate his thesis that the gospels were written for any and every Christian audience in the late first century to which the Gospel might circulate.The first stage of his arguments consists in contrasting the gospels with the Pauline letters.Treating the contrast between the Pauline letters and the gospels as first stage of the argument is for Bauckham important, since he is of the conviction that scholars of the consensus see the audiences of the gospels just as local and particularized as the major Pauline letters that address the specific needs and problems of each congregation that the letters are addressed to.According to Bauckham (1998:27-28), the gospels are not letters, and to appreciate the difference between the two, the following two considerations have to be taken into account: first, the crucial difference of genre, and second, the question of why someone would want to put something down in writing. 18 At this point of his argument Bauckham (1998:22-25) asks the question if one must not suppose that the assumption that each gospel was written for the evangelist's own community has not indeed been confirmed by the results of this kind of reading of the gospels.He answers this question negatively. in that he argues that current scholarship does not proceed by arguing that certain features of a gospel are explicable only if understood as addressed to a specific audience rather than to a general audience.The results of current scholarship, therefore."are results of applying to the text a specific reading strategy.not of showing that this reading strategy does better justice to the text than another reading strategy" (Bauckham 1998:22). 990 HTS 56(4) 2000 Digitised by the University of Pretoria, Library Services In regard to the question of genre, Bauckham argues that the special quality of a letter is that it is always read as, for example, the letter addressed to the Corinthians. This, however, is not the case with the gospels.From the mid-second century up to the twentieth century no reader supposed that the specific situation of the Matthean community was relevant to the reading of the gospel.Moreover, since the specific genre of the gospels is what can be called the Greco-Roman bios, no one would expect a bios to address the very specific circumstances of a small community of people. The full force of the difference in genre between the gospels and the letters of Paul, however, is only understood when we add to this difference the question: why would anyone write a letter or a gospel?In the case of the letters of Paul the answer is evident: distance required writing.On the other hand, Bauckham argues, orality sufficed for presence.Why would someone, if orality.sufficed for presence, go to the considerable trouble of writing a gospel for a community to which he was regularly preaching, or go to the trouble to freeze in writing his response to a local situation?This trouble, Bauckham argues, would only have been undertaken if there was a need to communicate widely with readers unable to be present at its authors' oral teaching. 
The rest of Bauckham's arguments for the likelihood that the gospels were written for general circulation are based on his understanding of the early Christian movement (see Bauckham 1998:30-44).According to Bauckham, the early Christian movement did not consist of scattered, isolated and self-sufficient communities with little or no communication between them, but rather of a network of communities with constant and close communication among themselves."In other words, the social• character of early Christianity was such that the idea of writing a Gospel purely for one's own community is unlikely to have occurred to anyone" (Bauckham 1998:30). Moreover, evidence show that mobility and communication in the first-century Roman world were exceptionally high.Good roads and safe travel by both land and sea made the Mediterranean world a very closely interconnected area.In this kind of situation people of the early Christian churches must have traveled very regularly, and so communication was built up and kept up, and contact between churches scattered across A Sitr./or the Gospel 0/ Mark? the empire was at the order of the day.19Gospels written for general circulation fit this scenario. Evidence further shows that the early Christian movement had a strong sense of itself as a worldwide movement.Christians saw themselves as brothers and sisters of all believers over the world, and if it were the case that some of the believing communities indeed experienced themselves as a small minority group experiencing alienation and opposition in its immediate socijil context, this solidarity with fellow-believers compensated for their own situation. A further aspect of early Christianity that Bauckham (1998:33) According to Bauckham (1998:36), these traveling leaders should be seen as models for the kind of persons who might have written the gospels.Because their own experience of the Christian movement was all but parochial, the writers of the gospels did not confine their attention, when composing their respective gospels, to the local needs of a single commun!ty, but rather wrote relevantly for a wide variety of churches in which their gospels might be read (Bauckham 1998:38). 
Another feature of early Christianity that Bauckham (1998:38) sees as support for his thesis is the sending of letters from one church to another (e g, 1 Peter, 1 Clement, 19 In this regard Bauckham finds support for his thesis in the work of Thompson (1998:49-69) who argues that churches form 30 tot 70 CE had the motivation and means to communicate regularly and in depth with one another.Thompson is of the opinion that many of the early congregations were less than a week's travel away from a main hub in the Christian network.If Mark's gospel, for example, was indeed written in Rome (with the endorsement of Peter's preaching), it could not have taken long for this gospel to spread right through the Mediterranean world.He also argues that Acts and the epistles of Paul preserved good evidence that the early Christian communities had strong reasons for staying in close connection with each other.Concluding from this.Thompson (1998:69) argues that it was more likely that the Gospels were written not over a period of decades (i e, from 70 to 100 CE), but within a few years of each other.Bauckham (1998:44-48) concludes with the following hermeneutic observations: The attempt by the current consensus in gospel scholarship to give the so-called Matthean or Markan communities a key hermeneutical role in the interpretation of the gospels have no hermeneutical relevance. In regard to the implied audience of the Gospels, what Bauckham proposes do not only implies a gospel audience broader than the current consensus allows (e g, a range of churches over a specific geographical area), but an audience that is indefinite rather than specific.The audience of the gospels were any and every church to whom a specific gospel may circulate.The intended audience of each gospel was an open category. 21 Bauckham finds support for this argument in the work of Alexander (1998:71-109).Alexander, in following Gamble (1995), is of the opinion that abundant material evidence of book production among early Christians can be put forward, although it dates from the second century and beyond.The sheer volume of early Christian papyri from the second century onwards testifies to this fact.Although this evidence comes from the second century and onwards, she argues that the sheer volume of written material indicates the early popularity and widespread use of written material.From this evidence she concludes that written material must have been fairly widely used during the period of the writing and disseminating of the Gospels. Digitised by the University of Pretoria, Library Services A Sik./or the Gospel 0/ Marie? The crux of Bauckham's argument does not imply that the gospels should be seen as autonomous literary works floating free of any historical context.The gospels have a historical context, that of the early Christian movement in the late first century. Bauckham's argument does not mean that the diversity of the gospels are underestimated.It simply denies what the consensus assumes, that is, that the diversity of the gospels require a diversity of readers. 
The desire to define the histo!ical meaning of a text as specifically as possible, by defining its historical context as closely as possible, is a hermeneutical mistake.The mistake, however, does not consist in thinking that the historical context is relevant, but lies in failing to see that texts vary in the extent to which they are context relevant.The New Testament scholarship, however, has indicated that the early Christian movement was all but one close-knit family.There were many different responses to the teachings of Jesus of Nazareth, responses that went different ways depending on their mix of peoples, social histories, discussions about the teachings of Jesus, and how they were to be interpreted and applied (Mack 1995:6).Mack (1995:6) describes the different groups that formed around the teachings of Jesus as follows: some were called Jesus HTS 56( 4)2000 Digitised by the University of Pretoria, Library Services movements, others became congregations of the Christ whose death was imagined as a martyrdom to justify a mixture of Jews an Gentiles into a new community (being "in" Christ, and therefore brothers and sisters of each other), and others developed into enclaves for the cultivation of spiritual enlightenment or the knowledge (gnosis) Jesus taught. 22 Mack (1988:82-97) identifies five Jesus movements that existed in southern Syria and Palestine fonn 30 CE to 70 CE: itinerants in Galilee, the so-called Q-community (based on the wisdom teachings of Jesus they adopted a social-critical way of life as alternative to the social dispensation they lived in; apocalyptic ism, as well as the death of Jesus, played no part in their understanding of Jesus; see Mack 1988:84-86), the pillars in Jerusalem (Jesus was understood as the authority in tenns of interpreting the law, circumcision was practiced, purity rules were adhered-to, and by means of table fellowship boundaries between Jewish Christians and Gentile Christians were drawn; see Mack 1988:88-90), the family of Jesus (a Hasidic kind of movement, like the pillars in Jerusalem, that adhered to the purity rules and emphasized the correct interpretation of the law, but also laid claim to be in close association with the family of Jesus; see Mack 1988:90-91), the congregation of Israel (a group that practiced table fellowship where at the miracle stories of Jesus were interpreted in tenns of the Moses-, Elijah-and Elisha traditions; see Mack 1988:91-93), and the synagogue reform (a group that stood in conflict with the local synagogues in the Hellenistic cities in Galilee and southern Syria, who, during table fellowship, discussed Jesus' chreiai, discussion that evolved into the pronouncement stories as we now have it in Mark; see Mack 1988:94-96). 
In regard to the congregations of the Christ, differences in opinion can also be detected: some cults gave no attention to the resurrection of Jesus, only reflecting on the death of Jesus and its supposed meaning for their own situation, others combined the Hellenistic myth of Jesus' death (as being a martyr) with the Jewish myth on the resur-22 Crossan (1999:239-574) is also of the opinion that the early Christian movement was a movement of diversity.Crossan's (1999:xxxiv) thesis is that in earliest Christianity at least two excluding and competing traditions existed, mimetic Christianity (the Life Tradition) and exegetic Christianity (the Death Tradition).The former put emphasis on the sayings of Jesus and on living in the kingdom of God, was centered in Galilee and went out from Galilee.The latter emphasized the resurrection of Jesus and his expected return, was centered in Jerusalem and went Out from Jerusalem.Both these traditions claimed exclusive continuity with the past. see Cameron 1982) it is clear that from as early as 30 CE up to the late second century different points of view on who Jesus was, and what fellowship with Jesus implied, existed.Bauckham's theory on the early Christian movement does not take these differences into account, and should therefore be refined. But what about the communities that are represented by the New Testament? Were they not the large or main body of the early Christian movement that Bauckham's universal gospels were addressed to?A point can be made, in this regard, maybe for the synoptic Gospels, but what about the gospel of John with its clear gnostic background? Moreover, it must be remembered that the texts of the New Testament were collected in the interest of a particular form of Christianity2J that emerged only by degree through the second and fourth centuries, and consists of a very small selection of texts from a large body of literature produced by various communities during the first hundred years after the death of Jesus. If the above description of the early Christian movement is correct, the question should be asked if it is at all possible to understand the early Christian communities (all consisting of a different mix of peoples, with different social histories, and with different discussions about the teachings of Jesus and how they should be interpreted and applied) as an irenic network in constant communication with each other, a network around which Christian literature circulated easily. A second aspect of Bauckham's thesis that needs our attention is the matter of genre.According to Bauckham (1998:28) the gospels must be seen as a special category of the Greco-Roman bios, and, although the implied readership of the ancient biography is a topic that still needs discussion, it seems unlikely that anyone would expect a bios to address the very specific circumstances of a small community of people.Bauckham's argument in this regard is clear: since the gospels are a special category of the Greco-Roman bios, its addressees could not have been a specific local community.Bauckham, however, does not put forward any convincing arguments for making the connection between a bios and a universal audience.To simply connect a specific genre to a specific 23 Mack (1995:6) refers to the type of Christianity as represented by the New Testament as "centrist", in that it positioned itself against gnostic fonns of Christianity on the one hand, and radical fonns of Pauline and spiritist communities on the other. 
HTS 56(4) 2000 Digitised by the University of Pretoria, Library Services A SiIz/or the Gospel 0/ Marie?kind of audience is not enough, since Bauckham's argument could just as well have been that, since the gospels are not of a special category of the Greco-Roman bios, it's implied audience must have been the specific circumstances of a small community of people 24 • In discussing the "full force" of the difference of genre, Bauckham (1998:28) goes even further.His argument, if I understand it correctly, is as follows: an apostolic letter and a gospel are not of the same genre.In the case of a letter, it is put down in writing (e g, 1 Corinthians), because Paul could not visit the congregation.Writing, therefore, is needed when distance required communication.Paul, for example, would not have written 1 Corinthians if he resided permanently in Corinth.Therefore, Bauckham concludes, distance required writing, whereas orality sufficed for presence.Which brings us to Bauckham's theory in regard to the gospels: since orality sufficed for presence, the writers of the gospels taught/preached to the communities they lived in orally, thus, like Paul, would not have written a gospel to the community, for example, of Edessa if they resided in Edessa.However, since it was not possible, for example, to travel to the Christians in Sepphoris in Galilee, a gospel would be written to be passed on to them and any other local community that was interested. From the above Bauckham's line of argument is clear: letters were written, because of distance, for a specific audience that could not be visited.The gospels (which are not letters), in tum were written, because of distance, for a universal audience (not a specific audience, since they are not letters).Or, to put it differently: if the gospels were addressed to specific communities, they would have been letters, not gospels.Thus, for Bauckham, only two possibilities exist: if someone needed to communicate over a distance with a specific identified audience, a letter was written, and if someone wanted to communicate over a distance with as many people as possible, a gospel was written. 24 How difficult it is to connect the gospel genre to an earlier antecedent such as the Greco-Roman bios can be seen when one looks at the arguments of, for example, Bultmann and Talbert in this regard.Bultmann (in Vorster 1981:15) argued that the gospels do not fit the bios as genre, since a) the gospels are mythical in character while the Greco-Roman bios do not make use of myths and mythical language; b) the gospels can be seen as cult legends, while the bios is not, and c) the gospels are, because of the eschatological orientation, world negating, while the Greco-Roman bios has no eschatological tendencies at all.Talbert (in Vorster 1981:16), in reaction to Bultmann, uses the same arguments to prove that the gospels indeed are 'biographies'.It should also be noted that the bios is not the only genre that has been proposed as antecedent for the gospels.Other possibilities that have been proposed are the aretology, tragedy, and comic tragedy (i e Hellenistic parallels) and Exodus, the biography of a righteous man and an apocalyptic drama (i e, Semitic parallels; see Vorster 1981:14-25). 
998 HTS 56(4) 2000 Digitised by the University of Pretoria, Library Services Are there not, however, a few other possibilities also?Was it impossible, for example, that a teacher of a specific community wrote down his gospel for his com- were written by specific authors for a universal audience, and a specific community became attached to one specific gospel since it gave expression to its own internal problems ar.d situation?And because of this, this specific community shaped (amended or altered) the original text at specific points to fit their own needs even better?Would it not then serve the text better if the text is read or interpreted against that (or a) specific background?Also, is it not possible that some Gospels were written for individual communities, while others (e g, Luke with its "universal" message) were indeed produced to be first and foremost read by an universal audience? Moreover, is it not the case that, after a text has been produced by a specific writer, the writer has no control over "his" text any more?In this regard Bauckharn, in my opinion, works with a modem assumption of the way texts are produced: they are written, printed and published, and then distributed for consumption.Was such a situation possible in the first century?In other words, is it not also possible that, the gospels we do have differ, for a number of reasons described above (if they were indeed composed by individual authors), quite a bit from their "original" form?Or do we.have to believe that what we have in the New Testament known to us as gospels are the "final" texts as produced by individual authors for a universal audience?One final example: say, for example, some followers of Jesus traveled around in a Cynic-like fashion and told HTS 56(4) 2000 Digitised by the University of Pretoria, Library Services the(ir) story of Jesus to whomever wanted to listen, and their specific story/interpretation of Jesus became so popular that some leaders of congregations visited started to use the Jesus-story heard as basis for their preaching in their specific communities, either taking it over as heard, or by adapting it? The point I want to make does not lie in the amount of possibilities that can be accumulated, but in the simple question: if any of the possibilities described above indeed are possible, would the gospels have looked different as we have them now?I do not think so.For Bauckham orality suffices where no distance is involved, but writing comes into play when distance is involved.What if oral\ty did suffice when the gospel of Mark, for ex.ample, was first nothing but a sermon to Mark's own congregation, and, when the need arose for this specific story of Jesus (i e Mark) to be heard elsewhere, it was written down to be circulated? 
In this regard, two other arguments of Bauckham (1998:32-33) need our attention. To substantiate his thesis that the gospels were written for a universal audience, Bauckham argues that the early Christian movement was a network of communities with close and constant communication between them (Bauckham 1998:30), that mobility was very high and that traveling between the different communities was easy and safe (Bauckham 1998:32), and that the leaders of the Christian movement moved around quite often (Bauckham 1998:33-34). If this indeed was the case, the question should be asked: if the leaders of the Christian communities did travel a lot (since it was so safe and easy), and were received in a quite welcome fashion in each community they visited (since the Christian movement was so close-knit), why did orality not suffice in such a situation? Was it then really necessary to put the gospel down in writing? Or are we to suppose that only four leaders of the early Christian community were not able to travel as much and where they wanted to, and therefore put down their gospels in writing?

Above all, Bauckham's theory on the mobility and the frequent moving around of Christian leaders in the early Christian movement is based on highly suspect evidence (see Bauckham 1998:33-34).

Another question that can be asked with regard to Bauckham's "universal thesis" is the matter of the dating of the gospels. According to Bauckham (1998:46), the gospels were written some time in the late first century. Why does Bauckham not want to be more specific? Is it not that a more specific choice will undermine his thesis? A more specific choice for a date of composition of Mark, for example, would mean that certain events (like the fall of the temple, the increasing activity of formative Judaism, the Caesar cult as practiced and advocated by, e g, Domitianus), that can be connected with specific communities in early Christianity, would come into play in understanding the content of the gospels. This would, of course, mean that the experiences of local communities would likely have to become part of gospel interpretation, exactly what Bauckham wants to avoid. The obvious in this regard, however, cannot be denied: a date of composition for the gospels must exist, since the gospels exist. If one wants to argue that the date of composition of the gospels has no hermeneutical relevance in understanding the gospels, it is not enough to simply practice an argument of silence. If Bauckham indeed is of the opinion that the date of the composition of a gospel has no importance in interpreting that specific gospel, it must be argued why this is the case.

The diversity of the gospels also needs our attention. According to Bauckham (1998:47-48), the diversity of the gospels does not weaken his thesis in regard to the universality of the gospels' addressees. His thesis, Bauckham argues, simply denies what the consensus assumes: that the diversity of the gospels requires a diversity of readers.
The distinctive nature of John, for example, does not imply that its intended readers were a highly distinctive branch of early Christianity, different from the readership of other gospels. It implies only that its author(s) wished to propagate his own distinctive theological rendering of the gospel story among whatever readers it might reach. This argument of Bauckham is, of course, based on two other arguments that form part of his thesis and that have already been questioned above: that the early Christian movement was an open, conflict-free and irenic movement, and that the audiences of the gospels were indefinite rather than specific. Three observations can be made in this regard. First, the early Christian movement, as indicated above, was anything but conflict-free and irenic. Second, Bauckham works with a modern Western view of the way texts are produced, printed and circulated. Third, even if the gospels were written for indefinite audiences, the audiences that actually received them were specific, for if they were not, no audiences existed at all. Would, for example, an audience that was negative with regard to a gnostic interpretation of the message of Jesus receive a gospel like John or Thomas at all?

The fact of the matter is that the gospels are diverse in terms of their content. Why this difference? Did the situation of, for example, the Markan community play no role whatsoever in the fact that Mark contains a (possible) corrective Christology (Weeden 1978), and a possible messianic secret as theological construct (Wrede 1971)? Why does Matthew make more of the correct interpretation of the law than all the other gospels, and why does John take up some elements of gnosticism in his gospel? Because, according to Bauckham, the evangelists had different understandings of Jesus and his story? Or because certain movements soon adopted specific understandings of Jesus (based on or relating to their specific circumstances, social background, mixes of people and influences from their context)? To pretend that only the latter is possible cannot honestly be argued. But then, to argue that only the first is possible, that is, that the circumstances of a specific community played no role whatsoever in the content of the different gospels and that the gospels are solely the product of what went on in the minds of the evangelists, can also not be accepted as unbiased.
Again it is my opinion that Bauckham does not make enough of the first-century Christian movement as first and foremost oral in character. Written texts were the exception, not the rule. If this was the case in the first century, and the gospels were indeed produced by individual authors for a general and universal audience, we shall have to rethink the date of the composition of the respective gospels most seriously, since they could only then have been written much later. Bauckham will then have to amend his thesis in the sense that the gospels were written for a general Christian audience from the middle of the second century, when the writing of texts started to become more part of the order of the day (e g, the writings of the apostolic Fathers). Is it, however, possible
• his understanding of the process of writing and/or creation of the gospels is reductionistic;
• Bauckham has not proved in a convincing manner that the genre of the gospels can be categorized as a special category of the Greco-Roman bios;
• Bauckham's view that the different audiences/communities in early Christianity, as well as their specific circumstances, played no part in either the writing or the content of the gospels, can be questioned;
• Bauckham works with a modern, Western-like model in his postulation of how the gospels were created, a model that does not take into account the full implications of the early Christian movement as an oral culture;
• the diversity of the gospels is not appreciated enough; and
• Bauckham's thesis lacks an attempt to date the different gospels, an enterprise which, if undertaken, could make his thesis vulnerable at some points.

In short, Bauckham's main thesis, and the arguments by which it is substantiated, are not convincing enough. By this it is not meant that the "old consensus" is right and that Bauckham is wrong. What is meant is that the old consensus, which assumes that each of the gospels was written to address the specific problems, circumstances and questions of a specific believing community, still seems to be a better hermeneutical tool to understand, not only

Dr Ernest van Eck (MA, DD) participated as research associate in the project "Biblical Theology and Hermeneutics", directed by Prof Dr A G van Aarde. This article is a revised edition of a paper read at the NTSSA, on 29 March 2000 at the Rand Afrikaans University.

literature beginning in the late first century, since both Rome (70 CE) and Babylon (587 BCE) are remembered for destroying Jerusalem. Beginning with the Papias tradition, the external and internal evidence in regard to the origin of Mark is thus clear: the second Gospel was written by Mark (as interpreter of Peter) in Rome (after the death of Peter and Paul).
applied Lohmeyer's thesis to the problem of understanding the conclusion of Mark's gospel.Using the Formgeschichte as historicalcritical tool, Lightfoot argues that on the basis of form and content Mark was intended to end at Mark 16:8.The significance of this ending is, however, made most clear by the theological opposition of Galilee and Jerusalem throughout the Gospel, as indicated by Lohmeyer.The first nine chapters of the Gospel (where Jesus operates in Galilee) and the last part of the Gospel (where Jesus is on his way to Jerusalem and operates in Jerusalem) show a remarkable difference: in the first nine chapters of the Gospel Jesus often calls for repentance, he calls for secrecy about his true identity, and exorcisms are at the order of the day.In contrast, in the last part of the Gospel there is no invitation to repentance, no charge to secrecy, and no exorcisms are carried out."Galilee and Jerusalem therefore stand in opposition to each other ... Galilee is shown to have been chosen A Sitz/or the Gospel 0/ Mark? of Mark.Marxsen supports the main arguments of Lohmeyer and Lightfoot.but criticizes both for overlooking the importance of distinguishing between tradition and redaction in Mark.Marxsen argues that the redaction in Mark is found in the framework of the Gospel.Although most of the references to place in the Gospel are already anchored in the tradition.the evangelist inserts Galilee as the place of Jesus' activity in all his redactional remarks (see Mk 1 Jesus' generation, but in Mark's own time.13In his Gospel Mark explains the extinction of the Jerusalem church and the abolition of Jewish legalism to vindicate the Gentile mission and to emphasize the way of the cross.Other Markan scholars that support Marxsen's thesis,14 albeit only indirectly, are Schoep (1960:1-60),Parker (1970:60) andTrocme (1975).Schoep (1960: 103-105) argues that, based on his analysis of Mark 13, the Markan community fled Jerusalem during the Jewish war to await the parousia in Galilee.Parker (1970:295-304), by comparing the communities in Mark and Acts, comes to the conclusion that Mark is representative of Galilean and Acts as that of Judean Christianity.Trocme (1975:48-59), in his tum, argues that an earlier form of Mark (including chapters 1 to 13) was produced in northern Palestine by Hellenistic Jews in the early fifties.In this "earlier Mark" Trocme finds a polemic against Jewish Christianity, based upon Mark's opposition to the temple (Mk 11; 13: 1-2), the implicit criticism of Jewish Christians in Jesus' attacks on the scribes and the Pharisees, and the criticism of the leadership in Jerusalem (e g, Peter [Mk 8:27-9:-1]; James]. 
himself: «[The evangelists wrote their Gospels] expected [their works] to circulate widely among the churches, had no particular Christian audience in view, but envisaged as [their] audience[s] any church (or in any church in which Greek was understood) to which [their] work[s] might find [their] way."As point of departure,Bauckham (1998:12) assumes Markan priority.This means, forBauckham (1998:12), that by the time Matthew and Luke wrote their respective gospels, the Gospel of Mark had already circulated quite widely around the churches and was being read in the churches to which Matthew and Luke respectively belonged.Thus, whatever Mark intended his Gospel to be, his work, as Matthew and Luke knew it, had already come to be used and valued, not as a work focused on the particular circumstances of Mark's community, but as a work generally useful to various different churches.Matthew and Luke, therefore, must have expected that their Gospels would also circulate at least as widely as Mark's had already done.They must have envisaged an audience at least as broad as Mark's gospel had already achieved, even expecting their 988 HTS56(4) 2000 20 See Bauckham (1998:34, footnotes 41 to 51) for the places these leaders traveled to, as well as the Biblical references he quotes to support his argument.992HTS56(4) 2000Digitised by the University of Pretoria, Library ServicesErnest JlQft EelPolycarp of Smyrna to Philippi and the six letters of Ignatius of Antioch).These letters established more than literary connections between churches.Letters imply messengers, and messengers imply personal contact, the conveying of news, sharing in worship, and taking back home news from the congregation that has been visited.Finally, Bauckham (1998:39-40) is of the opinion that we have concrete evidence for close contacts between churches in the period around or soon after the writing of the gospels.He cites the following three examples: the famous fragment of Papias' prologue to his lost work, the letters of Ignatius and the Shepherd of Hermas.From these letters, Bauckham (1998:41-44) argues, can be deducted that there was an active communication network between the different Christian communities in early Christianity.21To summarize: for Bauckham (1998:44) the early Christian movement was a "network of communities in constant communic~tion with each other, by messengers, letters and movements of leaders and teachers -moreover, a network around which Christian literature circulated easily, quickly~ and widely."The idea of writing a gospel only for local consumption does not fit this picture. gospels, for example, are relatively "open texts" that leave their implied readership more open and consequently leave their meaning more open to their real readers' participation in producing meaning.4. 
SOME CRITICAL REFLECTIONS ON BAUCKHAM'S THEO-RY ON THE UNIVERSAL AUDIENCE OF THE GOSPELS From the above it is clear that Bauckham's theory on the universality of the gospels goes hand in hand with his understanding (postulation) of the early Christian movement: a network of communities in constant communication with each other, by messengers, letters and lJlOvements of leaders and teachers, a network around which Christian literature circulated easily, quickly, and widely.In short, an irenic movement with no inner conflict, a movement (consisting of close-knitted communities) in which individual communities are open for influences from any other individual community(ies).To put it differently: the early Christian movement was one table, one shared meal, a movement practicing open commensality wider as one can imagine. leaders moved around quite often, quote in almost all cases tex.ts from the Acts of the Apostles to give a base for his argument.It is, however, more or less consensus in New Testament scholarship that Acts is highly tendentious in character, in that Acts (as part of 1000 HTS 56(4)2000 Digitised by the University of Pretoria, Library Services Luke-Acts) wants to give an irenic picture of the way the gospel of Jesus the Christ spread, first from Galilee to Jerusalem (Luke), and then from Jerusalem to Rome (Acts).This tendentious aspect of Acts becomes clear when, for example, a comparison is made between Acts 15 and Galatians 2.Moreover, from the letters of Paul (see 2 Cor 6:4-6; 11:23-28) a different picture of what travel in the first century entailed, can be drawn. Ernest "an Eel to read Mark against the background of the middle second century?Also, what about the different termini ad quem that can be argued for the different gospels which are earlier than the middle of the second century?5.CONCLUSIONBauckham's thesis on the addressees of the gospels can be questioned on grounds of the following arguments:• Bauckham's view of the early Christian movement in the latter half of the first century as being a close-knit, conflict free and irenic network of communities in constant communication with each other can be questioned on different grounds; To his knowledge,Swete (1909)was first to advance a Roman provenance for Mark, shortly followed by Plummer (who proposed that Luke was written for a specific Gentile audience) and the work of Streeter (in which he proposes that the four Gospel each originated in one of the four major centers of early Christianity, i e, Antioch, Rome, Caesarea and Ephesus).According to Bauckham (1998:15-16), the next impetus in the process by which the consensus came about that each evangelist wrote for his own specific community, came fromKilpatrick (1946; see Bauckham 1998: 15), in that Kilpatrick takes it for granted that the community in which Matthew wrote was the same as the community for whom he wrote.Kilpatrick's book is also the direct ancestor of the way recent commentaries on the Gospels (e g, Davies & Allison 1988 on Matthew, seeBauckham 1998:16) and Fitzmyer (1981 on Luke; seeBauckham 1998:16)discuss the question about the context in and audience for whom these respective evangelists wrote their gospels.In these works, however,Bauckham (1998:16)finds no arguments for the view that has become consensus: that each Gospel was written for or aimed at a specific community.The same can be said about Markan scholarship: in the late 1960s and 1970s in a series of books (e g that of Conzelmann munity, and that his written gospel then (like the 
letters of Paul) started to circulate in the early Christian movement to be used by more than one believing community? Was it not also possible that someone in a specific believing community could have written down what he heard, and that what was written down (a gospel) started to circulate as widely as Bauckham has suggested? There are still more possibilities available to us in this regard: what if the leader of a specific community, since his community knew the traditions he was making use of to tell his story about Jesus, decided to put in writing his version of the story in terms of the traditions available, so that his community had a specific interpretation of the traditions known to them? What if the results of the Formgeschichte (the work of Schmidt, Bultmann and Dibelius) are taken seriously in that the gospels are Kleinliteratur, the creation of the believing communities themselves? What if the gospels indeed
13,581.4
2000-12-14T00:00:00.000
[ "Philosophy" ]
Coupled Inositide Phosphorylation and Phospholipase D Activation Initiates Clathrin-coat Assembly on Lysosomes* Adaptors appear to control clathrin-coat assembly by determining the site of lattice polymerization but the nucleating events that target soluble adaptors to an appropriate membrane are poorly understood. Using an in vitro model system that allows AP-2-containing clathrin coats to assemble on lysosomes, we show that adaptor recruitment and coat initiation requires phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P2) synthesis. PtdIns(4,5)P2 is generated on lysosomes by the sequential action of a lysosome-associated type II phosphatidylinositol 4-kinase and a soluble type I phosphatidylinositol 4-phosphate 5-kinase. Phosphatidic acid, which potently stimulates type I phosphatidylinositol 4-phosphate 5-kinase activity, is generated on the bilayer by a phospholipase D1-like enzyme located on the lysosomal surface. Quenching phosphatidic acid function with primary alcohols prevents the synthesis of PtdIns(4,5)P2 and blocks coat assembly. Generating phosphatidic acid directly on lysosomes with exogenous bacterial phospholipase D in the absence of ATP still drives adaptor recruitment and limited coat assembly, indicating that PtdIns(4,5)P2 functions, at least in part, to activate the PtdIns(4,5)P2-dependent phospholipase D1. These results provide the first direct evidence for the involvement of anionic phospholipids in clathrin-coat assembly on membranes and define the enzymes responsible for the production of these important lipid mediators. An area of membrane that will give rise to a clathrin-coated bud is demarcated by the placement of adaptors at that site. This necessitates that adaptor recruitment onto membranes be controlled with good precision. The first glimpse of the real complexity of the adaptor recruitment process came from studies using the fungal metabolite brefeldin A (1,2).
When added to cells, this compound causes an extremely rapid disappearance of AP-1 adaptors and clathrin, from the trans-Golgi network (TGN) 1 (1,2), the site where this heterotetrameric adap-tor complex is usually massed to initiate the formation of clathrin-coated buds (3). The effect of brefeldin A led to the demonstration that the binding of AP-1 to the TGN is regulated by ADP-ribosylation factor (ARF) in a cycle of GTP binding and hydrolysis (4,5). We proposed a model in which an AP-1-specific, ARF-activated docking site initiates clathrin-coat assembly at the TGN (5). Bound to an ARF⅐GTP-activated docking site, AP-1 would begin to recruit clathrin (6,7). Sustained recruitment of both AP-1 and clathrin would result in the formation of an extensive polyhedral lattice. The association between AP-1 and the presumptive docking molecule is terminated on hydrolysis of ARF-GTP to GDP (8). We envisioned that the laterally expanding lattice would be tethered to the underlying membrane by AP-1-docking protein associations, primarily occurring at the perimeter of the growing coat. The density of AP-1 within the developing bud would be sufficiently high so that even lowaffinity interactions between protein-sorting signals and the 1 subunit of the adaptor heterotetramer (9,10) would result in preferential retention of select transmembrane proteins within the structure. If a sorting signal on a protein to be included within the clathrin coat disengaged from one 1 subunit, it would immediately encounter another AP-1 molecule so the mobility of sorted proteins would be severely restricted over the period in which the bud grows. Brefeldin A subverts coat assembly at the TGN by interfering with earliest known step of the process, catalyzed nucleotide exchange (11,12), blocking ARF⅐GTP delivery onto the membrane and all downstream events. Unlike the clathrin-coated structures on the TGN, clathrinmediated endocytosis is not perturbed by brefeldin A (1,2). This clearly establishes that the recruitment of AP-1 and AP-2 adaptors is regulated differently but, currently, very little is known about the nucleating events that precede AP-2 translocation onto membranes (9,13). Overexpression of several plasma-membrane receptors normally internalized in clathrincoated vesicles does not alter the steady-state distribution of AP-2 (14 -16), arguing that unregulated association between protein-sorting signals and AP-2 adaptors is unlikely to begin the clathrin-coat assembly process. Synaptotagmin, a calciumand phospholipid-binding protein first identified in synaptic vesicles, has been suggested to be a high-affinity AP-2-docking molecule (17). The mild phenotype of synaptotagmin I-null animals argues against this idea, but there are at least 7 other synaptotagmin isoforms, each able to bind AP-2 (18). If synaptotagmin is an AP-2-docking protein, the association with AP-2 must be tightly regulated because synaptotagmin I is predominantly a synaptic vesicle protein and synaptic vesicles are not clathrin coated until they are rapidly retrieved following their fusion with the presynaptic plasma membrane. Stage-specific assays for endocytosis show that early, but as yet uncharacterized events in clathrin-coat assembly at the cell surface require ATP hydrolysis (19). Polymerization of AP-2 and clathrin into coats on lysed synaptosomes also requires ATP (20). Recently, we showed that AP-2-containing clathrin coats can also assemble and invaginate on lysosomes in a strictly ATP-dependent fashion (21). 
Three-dimensional EM images reveal that the polyhedral lattice formed on the lysosome surface is identical to clathrin coats formed at the cell surface. Using this model system, we have now carefully dissected the role of ATP in initiating clathrin-coat assembly. Our results show that phosphoinositides, in particular phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P 2 ), are critical regulators of AP-2 adaptor binding. Direct evidence for a positive feedback loop between type I phosphatidylinositol 4-phosphate 5-kinase (PIP5K) and phospholipase D (PLD) is presented. Both of the lipids generated by this regulatory loop, PtdIns(4,5)P 2 and phosphatidic acid (PtdOH), appear to play important roles in initiating clathrin-coat assembly on lysosomes. Antibodies-mAb 100/2, directed against the ␣ subunit of AP-2, was provided by E. Ungewickell. Polyclonal serum from a rabbit injected with a peptide corresponding to residues 11-29 of the mouse 2 sequence (22) was provided by J. Bonifacino. Cell lines producing the clathrin heavy chain-specific mAb TD.1 and mAb AP.6, that recognizes the ␣ subunit of AP-2, were provided by F. Brodsky. An antibody specific for the clathrin light chain neuron-specific insert, mAb Cl57.3, was obtained from R. Jahn. A dynamin-specific mAb (clone 41) was purchased from Transduction Laboratories and the neutralizing antibody against type II PtdIns 4-kinase (PI4K), mAb 4C5G (23), was obtained from UBI. Affinity purified rabbit anti-lgp120 antibodies and mAb YA30, against lgp85, were provided by K. Akasaki. Anti-␣-mannosidase II antisera were from K. Moreman. Affinity-purified antibodies against the cation-independent mannose 6-phosphate receptor (MPR) have been described elsewhere (24). The antibodies used to detect the PIP5Ks were affinity-purified anti-type I PIP5K, prepared using purified erythroid PIP5KI (25), and polyclonal anti-PIP5K type I␣ and anti-PIP5K type I␤ antibodies, each affinity purified using the appropriate recombinant protein (26,27). Serum from rabbits injected with a peptide matching the carboxyl terminus of PIP5KI␥ was provided by H. Ishihara and Y. Oka (28). Affinity-purified anti-type II PIP5K antibodies were prepared using recombinant PIP5KII␣ (27). Subcellular Fractionation and Protein Purification-Rat brain cytosol and rat liver Golgi membranes and lysosomes were prepared as described previously (5,21). Lysosomes were resuspended in 20 mM Tris-HCl, pH 7.4, 250 mM sucrose with 25 g/ml each of antipain, aprotinin, chymostatin, leupeptin, and pepstatin A (protease inhibitors). A plasmid bearing the SH3 domain of amphiphysin I fused to glutathione S-transferase (GST) was provided by H. McMahon (29). Bovine ADP-ribosylation factor 1 (ARF1), with amino acids 3-7 replaced with the corresponding residues from yeast ARF2, was from S. Kornfeld (8). Proteins were expressed in Escherichia coli and purified by standard procedures. For depletion of dynamin, 1-ml aliquots of cytosol (10 mg/ml) were mixed with 150 g of either GST or GST-amphSH3 overnight at 4°C. GSH-Sepharose was then added to collect the protein complexes and then removed by brief centrifugation. The supernatants were stored in small aliquots at Ϫ80°C while the pellets were washed 4 times in phosphate-buffered saline before solubilization in SDS sample buffer. Immunodepletion of AP-2 from cytosol with AP.6-Sepharose was exactly as described (5). 
For the separation of type I and type II PIP5Ks (25,30,31), 25 ml of rat brain cytosol was dialyzed into 5 mM sodium phosphate, pH 7.5, 0.25 M NaCl, 1 mM EDTA, 1 mM EGTA, 2 mM 2-mercaptoethanol, 10% glycerol, and 0.1 mM phenylmethylsulfonyl fluoride (buffer A) and then loaded at 1.5 ml/min onto a phosphocellulose column (1.6 ϫ 10 cm) pre-equilibrated in buffer A. After washing with 200 ml of buffer A, bound protein was eluted with a 250-ml linear gradient of 0.25-1.25 M NaCl in buffer A, collecting 4-ml fractions. Fractions were analyzed on immunoblots after concentration by methanol/chloroform precipitation or assayed for PIP5K activity after dialysis into buffer A lacking sodium chloride. For some experiments, the PIP5KI and PIP5KII pools were concentrated with a Centricon 10. When assayed for PtdInsP production with lysosomes, the PIP5KI and PIP5KII pools were exchanged into 25 mM Hepes-KOH, pH 7.2, 125 mM potassium acetate, 5 mM magnesium acetate, 5 mM EGTA, 1 mM dithiothreitol over NAP-5 columns. Membrane Binding Assays-Recruitment onto lysosomes was performed as outlined previously (21). Briefly, reactions were performed in 25 mM Hepes-KOH, pH 7.2, 125 mM potassium acetate, 5 mM magnesium acetate, 5 mM EGTA, 1 mM dithiothreitol in a volume of 400 l. All assays contained 25 g/ml of each of the protease inhibitors indicated above. Lysosomes were added to final concentrations of 30 -50 g/ml and cytosol to 2-2.5 mg/ml as indicated in the legends. ATP, an ATP regeneration system (1 mM ATP, 5 mM creatine phosphate, 10 units/ml creatine kinase), apyrase, A-3, neomycin, various alcohols, and PLD were added and mixed on ice. After 20 min at 37°C, reactions were stopped by chilling in an ice-water bath. Variations are noted in the figure legends. After centrifugation, membrane-containing pellets and aliquots of the supernatants were prepared for immunoblotting. Details of the conditions used for SDS-PAGE and immunoblotting have been published elsewhere (5,21). For thin-section EM analysis, glutaraldehyde was added to the chilled reactions to give a final concentration of 2% and, after 15 min on ice, the membranes were collected by centrifugation and processed as detailed elsewhere (21). The immunofluorescence-based morphological binding assay, using digitonin-permeabilized NRK cells, was carried out on glass coverslips exactly as described (21). Cells were fixed with 3.7% formaldehyde in phosphate-buffered saline for 20 min and then processed for indirect fluorescence microscopy. Measurement of Phosphoinositide Kinase and PLD Activity-Synthesis of phosphoinositides on lysosomal membranes was assayed in 25 mM Hepes-KOH, pH 7.2, 125 mM potassium acetate, 5 mM magnesium acetate, 5 mM EGTA, 1 mM dithiothreitol in a final volume of 50 l. Membranes were added to a final concentration of 0.5 mg/ml and cytosol, when present, to 5 mg/ml. Incubations were initiated by the addition of [␥-32 P]ATP (0.5-1 Ci/mmol) to a final concentration of 500 M. After 10 min at 37°C, the reactions were terminated by addition of 3 ml of chloroform:methanol:concentrated HCl (200:100:0.75), followed by vigorous mixing. Carrier phosphoinositides (50 g/tube) were added and then a biphasic mixture generated by addition of 0.6 ml of 0.6 M HCl. After centrifugation at 200 ϫ g ave for 5 min, the lower organic phase was removed, transferred to a new tube, washed twice with 1.5 ml of chloroform, methanol, 0.6 M HCl (3:48:47) and then dried under a stream of N 2 gas at about 40°C. 
Dried lipid films were resuspended in chloroform:methanol:H 2 O (75:25:1) and spotted onto TLC plates. Formation of PtdIns(4,5)P 2 from either commercial PtdIns(4)P or synthetic PtdIns(5)P was assayed in a final volume of 50 l in 50 mM Tris-HCl, pH 7.5, 10 mM magnesium acetate, 1 mM EDTA, 80 M PtdInsP, 50 M [␥-32 P]ATP (5 Ci/mmol), and a source of enzyme. PtdInsP micelles were prepared by resuspending the dried lipid at 5 mg/ml in 20 mM Tris-HCl, pH 8.5, 1 mM EDTA, and stored at Ϫ80°C in small aliquots. Reaction mixtures were equilibrated to ϳ25°C for 5 min before starting the assays with the addition of ATP. After 6 min at ϳ25°C, 3 ml of chloroform:methanol:concentrated HCl (200:100:0.75) was added and the lipids extracted as described above except that no carrier lipid was added and the lower organic phase was only washed once. PLD activity was assayed in 50 mM Hepes-KOH, pH 7.5, 250 mM sucrose, 80 mM potassium chloride, 4.5 mM magnesium chloride, 3 mM calcium chloride, 3 mM EGTA, 1 mM dithiothreitol, protease inhibitors, and 0.5% 1-butanol in a final volume of 60 l (32). Organelles were added to a final concentration of 160 g/ml and activators were added as follows: a final concentration of 0.85 M ARF1 together with 100 M GTP␥S, and 0.1 M PKC␣ with 1 M phorbol 12-myristate 13-acetate. Exogenous substrate (mixed micelles of phosphatidyethanolamine, [ 14 C]PtdCho (52 mCi/mmol), PtdIns(4,5)P 2 at a molar ratio of 6.7:1.15:1) was added to a final lipid concentration of 132 M, prepared as described (32). Additions were made on ice followed by incubation at 37°C for 1 h. Assays were stopped by addition of 1 ml of chloroform:methanol: concentrated HCl (50:50:0.3) and 0.35 ml of 1 M HCl, 5 mM EDTA, followed by vigorous mixing. After centrifugation at 370 ϫ g max for 5 min, 0.4 ml of the lower organic phase was removed and dried under a stream of N 2 gas. Dried lipid films were resuspended in chloroform: methanol:concentrated HCl (50:50:0.3) and spotted onto TLC plates. Lipid Analysis-Phosphoinositides were resolved on silica gel HPTLC plates that had been previously immersed in 1% potassium oxalate, 2 mM EDTA in 50% ethanol. Lipids were spotted onto the plates after activation at 105°C for several hours. The developing solvent consisted of chloroform:acetone:methanol:acetic acid:water (160:60:52: 48:28). PtdOH and phosphatidylbutanol (PtdBut) were resolved from PtdCho on heat-activated Silica Gel LK6 plates in a solvent system of chloroform:methanol:acetic acid (13:5:1). After chromatography, lipid standards were visualized with iodine vapor and marked. Radiolabeled lipids were visualized by autoradiography. Plates containing 14 C-labeled lipids were sprayed with Enhance before exposure to film. Signals were quantitated by scintillation counting after scraping the relevant portions of the plate into vials. For the anion-exchange HPLC analysis, the relevant lipid spots were scraped off the plates and deacylated directly on the silica with methylamine reagent after addition of authentic [ 3 H]PtdIns(4)P or [ 3 H]PtdIns(4,5)P 2 (NEN Life Science Products). The water-soluble products were applied to a Partisil 10 SAX column and eluted with a gradient of 0 -1 M ammonium phosphate, pH 3.8, as described elsewhere (27). The Kinase Inhibitor A-3 Prevents AP-2 Recruitment onto Lysosomes-The recruitment of AP-2 onto purified lysosomes and subsequent clathrin lattice assembly is absolutely dependent on hydrolyzable ATP (21) (Fig. 1). 
This makes it unlikely that coat assembly is initiated simply by the direct association of AP-2 adaptors with sorting signals located on the cytosolic domain of lysosomal glycoproteins or plasmamembrane proteins which have made their way to the limiting membrane of the lysosome. When added to gel-filtered cytosol, ATP supports coat assembly with an EC 50 of approximately 100 M (Fig. 1A), suggesting that a phosphorylation event might be involved rather than constant ATP hydrolysis to actively drive coat assembly. Indeed, a broad specificity kinase inhibitor, the naphthalenesulfonamide A-3 (33), blocks clathrin assembly on lysosomes (Fig. 1B). Compared with the recruitment of AP-2 and clathrin seen when purified lysosomes are incubated at 37°C with brain cytosol and 500 M ATP (lane e), addition of equimolar (lane i) or higher (lane k) concentrations of A-3 fully inhibits the translocation of the ␣ and 2 subunits of the AP-2 complex and clathrin onto lysosomes. The amount of these proteins recovered in the pellets (lanes i and k) is equivalent to that found in the pellets from incubations lacking lysosomes (lanes h and j). Coat recruitment is little changed by addition of 250 M A-3 (lane g), a competitive inhibitor with respect to ATP. Correlation between PtdIns(4,5)P 2 Formation and Clathrincoat Assembly-Three lines of evidence suggest that inositide phosphorylation might be important for the initiation of clathrin-coat assembly on lysosomes. First, poorly hydrolyzable analogues of ATP, AMP-PNF, and ATP␥S, do not support coat assembly (21) and neither derivative serves as a phosphate donor for phosphoinositide synthesis (34). Second, neomycin, a polyamine antibiotic that binds to PtdIns(4)P and PtdIns(4,5)P 2 with high affinity (35) and has been widely used to intercede in phosphoinositide-regulated processes, inhibits clathrin-coat assembly on purified lysosomes in a dose-dependent fashion. Inhibition is apparent at 100 M and, in the pres-ence of 300 M neomycin, the recruitment of both the AP-2 complex and clathrin is completely blocked, indicating that the compound inhibits an early stage of the assembly reaction (data not shown). A similar observation has been made in a study of AP-2 recruitment onto endosomes (36). Third, A-3mediated inhibition of AP-2 and clathrin recruitment (Fig. 1B) occurs together with a complete block of PtdIns(4,5)P 2 synthesis (Fig. 1C). Lysosomes exhibit intrinsic PI4K activity (37,38) and mixing purified lysosomes with ATP allows this enzyme to phosphorylate endogenous PtdIns, generating PtdIns(4)P in a temperature-sensitive reaction (Fig. 1C, lane b compared with lane a). Anion-exchange HPLC analysis of the deacylated product of this lipid shows exact co-elution with an internal [ 3 H]glycerophosphorylinositol (GroPIns) 4-phosphate standard (data not shown), verifying this lipid as PtdIns(4)P. Addition of brain cytosol to an incubation containing lysosomes and ATP results in the synthesis of PtdIns(4)P and a second phospholipid, which comigrates with an authentic PtdIns(4,5)P 2 standard (Fig. 1C, lane d). The deacylated product of this lipid elutes coincidentally with a [ 3 H]GroPIns(4,5)P 2 standard (data not shown). Including 1 mM A-3, which completely abrogates AP-2 and clathrin recruitment onto the lysosome surface (Fig. 1B), totally inhibits the activity of the soluble PIP5K (Fig. 1C, lane f) without affecting the PI4K activity. 
These experiments show that mixing lysosomes with cytosol and ATP allows for a robust synthesis of PtdIns(4,5)P 2 on the lysosome surface and suggest that this lipid might be important for the initiation of clathrin coat formation on lysosomes. Coat Assembly on Lysosomes Begins with AP-2 Adaptor Recruitment-The pleckstrin homology domain is a modular protein fold that appears to regulate the translocation of several soluble proteins onto membranes by binding to various phosphoinositides (39,40). Because PtdIns(4,5)P 2 is generated on the lysosome in our assays, and because dynamin contains a pleckstrin homology domain that specifically binds to PtdIns(4,5)P 2 (41), it was possible that we had reconstituted coat assembly in reverse. Dynamin translocation onto PtdIns(4,5)P 2 -containing lysosomes could trigger the recruitment of amphiphysin (29,42) which, in turn, could then recruit AP-2, synaptojanin and clathrin (43)(44)(45). In fact, this reverse reaction is known to occur in vitro (46). If dynamin recruitment does initiate clathrin-coat assembly on lysosomes, then depleting this protein from brain cytosol should abrogate coat formation in vitro. Dynamin was selectively removed from rat brain cytosol using a GST-amphSH3 domain fusion protein (29). For comparison, mock-depleted cytosol, treated with GST, and AP-2-depleted cytosol (5), were also used. Examination of the specificity ( Fig. 2A) and extent of depletion (Fig. 2B) confirms that dynamin and AP-2 removal is virtually complete but that the resulting depleted cytosols still contain normal levels of several other major polypeptides known to participate in clathrin coat formation. Eliminating dynamin from cytosol does not alter clathrincoat assembly on lysosomes detectably. AP-2 recruitment is evident both from the loss of the adaptor in the soluble fraction (Fig. 2B, lane e compared with lane d) and from the simultaneous appearance of the adaptor complex in the lysosome pellet (Fig. 2C, lane e compared with lane d). Clathrin behaves similarly. This is identical to the results obtained with the mockdepleted cytosol (lanes b and c). Removing AP-2 from the donor cytosol, however, has a dramatic effect on coat formation. Very little AP-2 and clathrin are found on the pelleted lysosomes (Fig. 2C, lane g compared with lanes c and e) and no change in the amount of clathrin in the soluble fraction is evident (Fig. 2B, lane g compared with lane f). This verifies that AP-2 binding is necessary for subsequent clathrin recruitment. Although indicated. After 20 min at 37°C, the tubes were centrifuged and the pellets analyzed by immunoblotting with anti-AP-2 ␣-subunit mAb 100/2, anti-AP-2 2-subunit serum, anti-clathrin heavy chain (HC) mAb TD.1, anti-clathrin light chain (LC) mAb Cl57.3 or anti-lgp85 mAb YA30. The mobility of M r standards is indicated on the left and only the relevant portion of each blot is shown. Note that when ATP is limiting, light chain-free clathrin appears to aggregate and precipitate from the cytosol and is recovered in the pellet fractions (lanes h-k). C, reactions containing 0.5 mg/ml lysosomes, 5 mg/ml cytosol, 500 M [␥-32 P]ATP, and 1 mM A-3 were prepared as indicated. After incubation at 37°C for 10 min the lipids were extracted and analyzed by TLC and autoradiography. The migration positions of authentic phospholipid standards, visualized with iodine, are indicated. FIG. 1. Clathrin-coat assembly and phosphoinositide synthesis on lysosomes. 
A, reactions containing 30 g/ml purified liver lysosomes, ϳ2.5 mg/ml gel-filtered rat brain cytosol, and 0 -1 mM ATP were prepared as indicated. After 20 min at 37°C, the tubes were centrifuged and the pellets analyzed by immunoblotting with anti-AP-2 ␣-subunit mAb 100/2. AP-2 binding is expressed as the percent of maximal obtained with an ATP-regeneration system (1 mM ATP, 5 mM creatine phosphate, 10 units/ml creatine kinase) and the mean Ϯ S.D. of four independent determinations is shown. B, reactions containing 30 g/ml purified liver lysosomes, 2.5 mg/ml gel-filtered cytosol, an ATP-regeneration system (ATP r ), 500 M ATP, and 0 -1 mM A-3 were prepared as the pellets from incubations with dynamin-containing cytosol do contain dynamin (Fig. 2C, lanes c and g), equivalent amounts of dynamin are also recovered in the pellets from incubations lacking lysosomes (lanes b and f). This simply reflects the propensity of dynamin to form extensive homooligomers, but makes it difficult to discern whether dynamin is also being actively recruited onto the clathrin-coated buds that form on the lysosomes. These experiments establish that in our system clathrin-coat assembly follows what is considered the physiological sequence, with adaptor recruitment beginning the assembly process. The generation of PtdIns(4,5)P 2 may facilitate AP-2 binding and dynamin is not required for the early stages of lattice assembly. This is in agreement with our earlier results showing that GTP␥S does not modulate adaptor recruitment in the lysosomal system (21). Characterization of the Inositide Kinases Involved in PtdIns(4,5)P 2 Formation on Lysosomes-PI4Ks are divided into two distinct subfamilies, designated type II and type III (23). Type II enzymes are predominantly membrane-associated, whereas type III activity is mainly found in soluble extracts, although some type III activity is also associated with the particulate fraction (38,47). The type III enzymes are also completely inhibited by micromolar concentrations of wortmannin (47)(48)(49) and, on this basis, can be distinguished from the type II PI4K, which is wortmannin insensitive. The generation of PtdIns(4)P on lysosomes is unaffected by up to 20 M wortmannin (data not shown), suggesting that the lysosomal PI4K is most likely a type II enzyme. This conclusion is strengthened by the ability of a neutralizing anti-type II antibody, mAb 4C5G (23), to inhibit PtdIns(4)P generation in a dose-dependent fashion if the lysosomes are preincubated with the antibody prior to the addition ATP (data not shown), and is in accord with a study examining the subcellular distribution of PI4K isoforms (38). It is also important to note that the insensitivity of PtdIns phosphorylation to micromolar concentrations of wortmannin again rules out that the labeled lipid generated on lysosomes is the product of PI3K activity. Two peaks of PIP5K activity are resolved after fractionating rat brain cytosol on phosphocellulose (Fig. 3A). The first and major peak, designated type I PIP5K (25,30,31), elutes with approximately 0.6 M NaCl and is composed of at least three distinct enzymes, PIP5K types I␣, -I␤, and -I␥ (26,28,50). The different type I PIP5Ks are the products of separate genes (26, 28, 50) but, because the central kinase domains of these enzymes are about 80% identical, all these polypeptides are detected on immunoblots by affinity-purified antibodies raised against type I PIP5K purified from erythrocytes (25) (Fig. 3B). 
The identity of the ϳ90-kDa type I␥, the 68-kDa type I␣, and the 67-kDa type I␤ enzymes is confirmed on duplicate blots probed with isoform-specific antibodies, however (Fig. 3B). The minor peak of type II PIP5K activity elutes from phosphocellulose with about 1.1 M NaCl (Fig. 3A). Again, there are two main isoforms known, the ϳ53-kDa type II␣ and type II␤, both being detected with an affinity-purified anti-type II antibody (Fig. 3B). To determine whether the type I, type II, or both types of PIP5K can phosphorylate the PtdIns(4)P generated on lysosomes to form PtdIns(4,5)P 2 , pooled fractions enriched with each activity from the phosphocellulose column were added to purified lysosomes in the presence of [␥-32 P]ATP (Fig. 3C). Only the type I PIP5K pool produces PtdIns(4,5)P 2 (lane f). The type II pool is inactive (lane h) although PtdIns(4)P is generated as a potential substrate. This agrees with previous data showing that the type II PIP5K does not phosphorylate intrin-sic PtdIns(4)P in phospholipid bilayers (30). In fact, unlike the type I PIP5Ks, the type II pool shows a marked preference for PtdIns(5)P over PtdIns(4)P as a substrate (Fig. 3D), confirming that these enzymes appear to be predominantly PtdIns(5)P 4-kinases (51). Because type I PIP5Ks phosphorylate not only PtdIns(4)P but 3-phosphate-containing phosphatidylinositols as well (27,52), we again checked that the lipid produced by the type I pool is, in fact, PtdIns(4,5)P 2 . HPLC analysis of the deacylated product also shows precise co-elution with the [ 3 H]GroPIns(4,5)P 2 standard (data not shown). PtdIns(4,5)P 2 synthesis on lysosomes catalyzed by the partially purified type I-PIP5K pool is dose dependent (Fig. 3E, lanes c and e) and, importantly, is completely abolished by including A-3 in the assay (lanes d and f), just as is seen with whole cytosol (lanes a and b). Taken together, our results show that a type II PI4K located on the lysosome membrane phosphorylates PtdIns to produce PtdIns(4)P, which, in turn, is phosphorylated by a soluble, A-3-sensitive type I PIP5K to generate PtdIns(4,5)P 2 . PtdOH Generation Precedes AP-2 Adaptor Recruitment-The in vitro activity of all known type I PIP5Ks is strongly stimulated by PtdOH (25,28). Two main pathways for the generation of PtdOH are known. One occurs by the phosphorylation of diacylglycerol by diacylglycerol kinase while a second is via the hydrolysis of the polar head group of PtdCho catalyzed by PLD. Only trace amounts of PtdOH are generated by phosphorylation in our assays (Fig. 1C) so to investigate whether PLD activity is required to generate PtdOH necessary to drive the synthesis of PtdIns(4,5)P 2 by type I PIP5K, we checked the effect of 1-butanol on the synthesis of this lipid. PLD catalyzes a distinctive transphosphatidylation reaction producing phosphatidylalcohols in the presence of primary alcohols. Since phosphatidylalcohols are unable to substitute for the biologically active PtdOH, primary alcohols inhibit PLD-dependent reactions. When added at 1.5%, 1-butanol has a dramatic effect on PtdIns(4,5)P 2 formation (Fig. 4A, lane e compared with lane d) while tertiary butanol is inert (lane f). 1-Butanol similarly prevents the formation of PtdIns(4,5)P 2 catalyzed by the semipurified type I PIP5K pool. 2 Significantly, the effect of 1-butanol on PtdIns(4)P synthesis is considerably weaker (lanes a and d compared with lanes b and e). 
To determine whether PtdOH generated by PLD is important for clathrin-coat assembly, we tested the effect of adding alcohols to our recruitment assays. Adding 1-butanol to the standard assay inhibits the recruitment of AP-2 onto lysosomes in a dose-dependent manner (Fig. 4B). At 1.5%, inhibition is near complete while at the same concentration t-butyl alcohol has little effect (Fig. 4B, lane n compared with lane d). Analogous results are obtained with 1-propanol, which also abrogates coat assembly, and 2-propanol, which has no effect. 2 Taken together, these experiments confirm the existence of a positive feedback regulatory loop between PIP5KI and PLD (53) and indicate that, in our system, PIP5KI activity is largely dependent on PtdOH. The generation of PtdOH on the lysosome is a prerequisite for the initiation of coat assembly, which, based on our results, is required for the synthesis of PtdIns(4,5)P 2 . PLD1-like Activity Is Associated with Purified Rat Liver Lysosomes-Two forms of PtdIns(4,5)P 2 -regulated PtdCho-specific PLD, termed PLD1 and PLD2, have been identified in mammals (54,55). To verify that the PLD activity observed in our assay is indeed lysosome-associated, we used an exogenous substrate assay performed in the presence of 1-butanol. This FIG. 3. PtdIns(4,5) P 2 synthesis on lysosomes catalyzed by type I PIP5K. A, rat brain cytosol was loaded onto a phosphocellulose column and bound proteins eluted with a linear gradient of 0.25-1.25 M NaCl. An aliquot of every second fraction was assayed for PIP5K activity using PtdIns(4)P micelles as a substrate. The two peaks of PIP5K activity resolved, type I and type II, are indicated. B, aliquots of every second fraction from a similar column run were analyzed on immunoblots with affinity-purified antierythroid PIP5KI antibodies (a), affinitypurified anti-PIP5KI␣ antibodies (b) affinity-purified anti-PIP5KI␤ antibodies (c), anti-PIP5KI␥ serum (d), or affinitypurified anti-PIP5KII antibodies (e). C, reactions containing 0.5 mg/ml lysosomes, 5 mg/ml brain cytosol, 37.5-l aliquots of either the PIP5KI or the PIP5KII pools and 500 M [␥-32 P]ATP were prepared as indicated. After 10 min at 37°C, lipids were extracted and analyzed by TLC and autoradiography. D, reactions containing 80 M PtdIns(4)P or synthetic PtdIns(5)P, 25-l aliquots of either the PIP5KI or the PIP5KII pool and 50 M [␥-32 P]ATP were prepared as indicated. After 6 min at 25°C, lipids were extracted and analyzed by TLC and autoradiography. Note that the mobility of the PtdIns(4,5)P 2 made from the synthetic PtdIns(5)P differs from that of PtdIns(4,5)P 2 made from PtdIns(4)P because of differences in the acyl chain composition. E, reactions containing 0.5 mg/ml lysosomes, 5 mg/ml brain cytosol, 35-l aliquots of either unconcentrated (lanes c and d) or 5-fold concentrated (lanes e and f) PIP5KI pool, 500 M [␥-32 P]ATP, and 1 mM A-3 were prepared as indicated. After 10 min at 37°C, lipids were extracted and analyzed by TLC and autoradiography. Note that the slightly aberrant migration position of PtdIns(4,5)P 2 in lane c of this experiment was identical to the mobility of the carrier PtdIns(4,5)P 2 in that lane visualized with iodine vapor. enables us to follow the stable product of transphosphatidylation, PtdBut. A low level of PtdCho cleavage occurs upon mixing purified lysosomes with the exogenous reporter micelles (Fig. 5A, column b) indicating that PLD activity is present on lysosomes. Activators of PLD1, ARF1, and PKC␣, augment the hydrolysis of the exogenous PtdCho. 
Added alone, ARF1⅐ GTP␥S stimulates PLD (column c) considerably better than PKC␣ and phorbol 12-myristate 13-acetate (column d) but, together, the two proteins synergize to stimulate PLD activity maximally (column e). Cleavage of PtdCho to produce PtdBut is totally dependent on PtdIns(4,5)P 2 and negligible PtdBut formation is seen when liposomes lacking the phosphoinositide are used as a substrate (data not shown). The production of PtdBut verifies the existence of PLD on the lysosome surface and the specific activity of the activated lysosomal enzyme (ϳ2.5 nmol/mg/h) compares favorably with PLD1 activity measurements on other membrane preparations, although our data cannot be compared directly to assays quantitating choline head group release because 1-butanol also inhibits the catalytic activity of PLD (56). For comparison then, we also analyzed purified Golgi membranes, which are known to contain ARF-stimulated PLD1 activity (57). The catalytic activity associated with Golgi responds to PLD1 activators very similarly to that associated with lysosomes (Fig. 5A, columns f-i). The relative purity of the two organelle preparations is shown on immunoblots probed with antibodies against ␣-mannosidase II, lgp120, and lgp85 (Fig. 5B). Lysosomes have no detectable mannosidase II but are heavily enriched with the lysosomal membrane proteins, as expected. The gross protein profiles of the two organelle preparations are also clearly different from each other (lanes b and c) and from the total liver homogenate (lane a). These results show that like Golgi membranes, purified rat liver lysosomes also contain membrane-associated PLD1-like activity and the relative abundance of this enzyme on lysosomes makes it unlikely that the activity comes from contaminating Golgi or endoplasmic reticulum structures. PtdOH Alone Drives Limited Clathrin Coat Formation-Since PtdOH is a potent biological mediator itself, the question arises whether the critical agent generated by the positive feedback loop between PLD and PIP5KI that is necessary to initiate clathrin-coat assembly is PtdIns(4,5)P 2 , PtdOH, or both of these lipids. If PtdIns(4,5)P 2 is required only to stimulate PLD1-dependent formation of PtdOH, then it should be possible to promote nucleotide-independent coat assembly on lysosomes in vitro using exogenous PLD. When 0.5 unit/ml S. chromofuscus PLD, which is not dependent on PtdIns(4,5)P 2 for FIG. 4. Primary alcohols inhibit PtdIns(4,5)P 2 synthesis and coat assembly. A, reactions containing 0.5 mg/ml lysosomes, 5 mg/ml cytosol, 1.5% 1-butanol or t-butanol, and 500 M [␥-32 P]ATP were prepared as indicated. After 10 min at 37°C, lipids were extracted and analyzed by TLC and autoradiography. B, reactions containing 50 g/ml lysosomes, 2 mg/ml cytosol, 1 mM ATP, and 0 -2% 1-butanol or t-butanol were prepared as indicated. After 15 min at 37°C, membranes were recovered by centrifugation and the pellets analyzed by immunoblotting with anti-␣-subunit mAb 100/2 or anti-2-subunit serum. Note that the t-butanol experiment comes from a separate blot on which the immunoreactivity in the presence of 2% t-butanol did not differ significantly from the signal obtained in the absence of the alcohol. ]PtdCho:PtdIns(4,5)P 2 -containing substrate liposomes, and 0.5% 1-butanol were prepared as indicated. After 60 min at 37°C, lipids were extracted and analyzed by TLC and fluorography. 
Quantitation of PtdCho hydrolysis from a representative experiment of three (in which maximal stimulated PLD activity on lysosomes (column e) and Golgi (column i) ranged from 0.46 to 4.95 and 0.64 to 2.47 nmol/mg/h, respectively) is shown. B, analysis of lysosome and Golgi membrane markers. Samples of 25 g each of a total rat liver homogenate or purified liver lysosomes or liver Golgi membranes were fractionated by SDS-PAGE and either stained with Coomassie Blue (a-c) or transferred to nitrocellulose (d-f). Blots were probed with anti-mannosidase II serum, affinity-purified anti-lgp120 antibodies or anti-lgp85 mAb YA30. activity, 3 is added to a mixture of gel-filtered brain cytosol and lysosomes in the absence of ATP, AP-2 does translocate onto the lysosome surface (Fig. 6, lane g). No recruitment is seen in the absence of PLD (lane e). Adaptor binding is about 3-fold less efficient with PLD than the recruitment seen in the presence of ATP (lane g compared with lane c), but is nevertheless accompanied by clathrin recruitment. Interestingly, adding up to 8 units/ml of the bacterial PLD does not increase the amount of coat components associated with the lysosomes in the absence of ATP. 3 Incubations containing PLD together with apyrase or A-3 rule out the possibility that the bacterial PLD facilitates coat recruitment by stimulating PIP5KI to generate PtdIns(4,5)P 2 using trace ATP remaining in the gel-filtered cytosol. Neither AP-2 nor clathrin recruitment is markedly altered by including either 10 units/ml apyrase or 100 M A-3 (Fig. 6, lanes i and k compared with lane g). Thin-section EM analysis verifies that the bacterial PLD indeed promotes the assembly of identifiable clathrin coats (Fig. 7). Our purified lysosome preparations consist predominantly of dense-core organelles and characteristic bristle-like areas of assembled clathrin can occasionally be seen on the lysosomes after incubation with gel-filtered cytosol and bacterial PLD (panels a and b). While the coat appears indistinguishable from the clathrin-coated buds that readily form on lysosomes incubated with cytosol and ATP (panels c and d), we do not find any evidence for coated bud formation in the presence of PLD alone. The PLD treatment also results in significant rupture of a proportion of the lysosomes, a phenomenon rarely seen with ATP. Empty lysosomal membrane fragments and free lumenal material are observed after the incubation with the bacterial enzyme (panels a and b, arrows). We conclude that PtdOH plays an important role in initiating AP-2 and clathrin assembly on lysosomes. Additional ATP-or PtdIns(4,5)P 2 dependent steps are, however, required for extensive lattice formation and invagination. This is in agreement with our biochemical data showing only limited recruitment of clathrin and AP-2. Bacterial PLD Targets AP-2 to Lysosomes in Permeabilized NRK Cells-The rupture of lysosomes noted after adding PLD reiterates that this enzyme hydrolyzes PtdCho rather indiscriminately. Because PtdOH has also been linked to the assembly of COPI-coated vesicles on the Golgi (58), COPII coats at endoplasmic reticulum export sites (59,60), protein export from the TGN (61)(62)(63) and to clathrin-coat assembly (36), we examined whether the recruitment of AP-2 in the presence of exogenous PLD shows compartmental specificity. Precise targeting of AP-2 onto lysosomes is seen in permeabilized NRK cells incubated with cytosol and ATP. An extremely high incidence of colocalization of AP-2 with the lysosomal glycoproteins lgp120 (Fig. 
8, panels a and b) and lgp85 2 is evident. In this system, AP-2 and clathrin recruitment is also totally dependent on ATP. No AP-2 translocates onto lgp120-positive structures in the absence of ATP (panels c and d) unless PLD is added (panels e and f). This again suggests that PLD appears to function downstream of PIP5K. In addition, although the colocalization of AP-2 with lgp120 is less exact when PLD is used, the recruitment is still compartmentally regulated. The staining pattern of the recruited adaptor is clearly different from that of ␣-mannosidase II, a medial Golgi marker (panels g and h). In most cells, the adaptor complex targets onto membranes that are spatially distinct from the MPR-containing late-endosomal elements (panels i and j). While some overlap between the MPR-positive structures and the recruited AP-2 is seen in a few cells (panels i and j, arrowheads) FIG. 7. EM analysis of clathrin-coat assembly on lysosomes. Reactions containing 120 g/ml lysosomes, 3 mg/ml gel-filtered cytosol, and either 0.5 unit/ml S. chromofuscus PLD (a and b) or an ATP regeneration system (c and d) were prepared on ice. After incubation at 37°C for 20 min the tubes were chilled, fixed, and then processed for EM. Selected images that are typical of the coated structures (arrowheads) formed under each condition are shown. Note that no budding of the coated profiles (arrowheads) is seen after PLD treatment and significant rupture of the lysosomes, resulting in both free limiting membrane fragments (fine arrows) and exposed lumenal material (bold arrows) occurs. Bar, 100 nm. spots. The similarity in staining may therefore reflect the close proximity of the late endosome and lysosome compartments in some of these cells. We do note that AP-2 recruitment onto remnants of the plasma membrane also becomes evident in the presence of the bacterial PLD (panels f, g, and i). Because PtdOH is likely to be generated on all membranes in these experiments, the results establish that AP-2 does not simply associate with PtdOH-rich membranes; additional determinants at the site of coat assembly appear necessary to initiate adaptor recruitment. FIG. 8. Bacterial PLD initiates adaptor recruitment onto lysosomes. Digitonin-permeabilized NRK cells were incubated with 2.5 mg/ml gel-filtered cytosol and 1 mM ATP (a and b) or without ATP (c-j) and with 0.5 unit/ml S. chromofuscus PLD (e-j). After 20 min at 37°C the cells were washed, fixed, and then processed for indirect immunofluorescence using a mixture of affinity-purified anti-lgp120 antibodies (a, c, and e) and anti-␣-subunit mAb AP.6 (b, d, and f) or mAb AP.6 (g) and anti-mannosidase II serum (h) or mAb AP.6 (i) and affinity-purified anti-CI-MPR antibodies (j). The conditions for photography and printing were identical for a-f. The images are typical of each treatment and selected regions of colocalization of the lgp120 and AP-2 signals are indicated (arrowheads) as are cells in which substantial overlap between the AP-2 and MPR staining occurs. DISCUSSION The results of this study provide a framework to begin to understand the mechanisms that regulate the precise targeting of AP-2 onto membranes in greater molecular detail (Fig. 9). We find that both PtdIns(4,5)P 2 and PtdOH are important regulators of AP-2 recruitment and clathrin-lattice assembly on lysosomes. We show that PLD1-like activity is associated with purified lysosomes, consistent with the dispersed punctate distribution of transiently overexpressed PLD1 observed by others (54,64). 
Our results also demonstrate directly, in the context of a biological membrane, that a positive feedback loop exists between this enzyme and PIP5KI. PtdIns(4,5)P 2 is obligatory for PLD activity and PtdOH, the product of this activity, acts as an allosteric activator of PIP5KI activity. PLD Activation and AP-2 Recruitment-A clear role for PtdOH in the construction of a clathrin-coated bud is evident from the ability of bacterial PLD to induce AP-2 translocation and effect limited coat assembly under conditions that avert PtdIns(4,5)P 2 synthesis. While we cannot rule out that residual PtdIns(4,5)P 2 on the limiting membrane of the lysosome contributes to AP-2 recruitment under these conditions, PtdOH must play an important role because, in the absence of ATP, no recruitment occurs without the exogenously added PLD. The data also indicate that ongoing PtdIns(4,5)P 2 formation is not required to permit adaptor binding. Superficially, some of our results appear similar to those in a recent study examining the role of PLD activation in the targeting of AP-2 onto endosomes (36). However, GTP␥S is not at all required to induce coat formation in our system (21). Because the hydrolysis of exogenous PtdCho by the lysosomal PLD is potentiated by ARF⅐GTP␥S, we assume the lysosome-associated lipase to be PLD1-like. This corroborates the recent colocalization of GFP-PLD1b, the major PLD1 isoform expressed in most rat tissues (65), with LAMP-1-positive membranes (64). Nevertheless, the minimal effect of GTP␥S on clathrin-coat initiation suggests that activity of the PLD1-like enzyme toward lysosomal membrane phospholipids does not appear to be absolutely dependent on activated ARF. In fact, the precise physiological roles of the in vitro activators of PLD1, ARF, Rho, and PKCa, is still being debated (66). PtdOH appears to play a general role in regulating the assembly of several types of coated membrane (36, 46, 58 -63, 67), but it is unclear exactly how this is accomplished. One possibility is that direct interaction of certain phospholipids with coat proteins, in our case AP-2, initiates bud formation. It has recently been shown that clathrin-coated components can translocate onto synthetic liposomes, and commence the assembly of coated buds, in a nucleotide-independent fashion (46). The recruitment of AP-2 and clathrin from brain cytosol onto synthetic liposomes proceeds only poorly, however, and is considerably less efficient than the recruitment of dynamin, amphiphysin, and synaptojanin onto these same liposome preparations (46). Coat assembly equivalent to that seen when cytosol is mixed with synaptosome-derived membranes is only observed on the synthetic liposomes after fortifying the donor cytosol with ϳ0.5 mg/ml soluble AP-2 and clathrin extracted from purified clathrin-coated vesicles (46). This indicates that the interaction between AP-2 and lipids alone is a low-affinity phenomenon and, interestingly, no dramatic effect on AP-2 and clathrin binding or coat formation occurs on adding either phosphoinositides or PtdOH to the lipid vesicles (46). In permeabilized NRK cells treated with bacterial PLD under conditions that we expect PtdOH to be generated on all PtdChocontaining membranes, we show that AP-2 targets almost exclusively onto lysosomal glycoprotein-rich lysosomal membranes. This shows that lipids alone are not sufficient to nucleate coat assembly on any biological membrane. 
Nonetheless, we do find that the synthesis of relatively minor but potently active acidic phospholipids is pivotal in nucleating clathrin polymerization on lysosomes and the production of these lipids promotes exceptionally efficient coat formation in the presence of dilute cytosol. Our interpretation is that there are additional factors on the membrane that raise the affinity of AP-2 for the lysosomal membrane substantially. The growing consensus is that dedicated docking molecules, restricted to the bud site, probably provide the primary binding interface for adaptors on membranes (9). We favor the idea that PtdOH activates a putative AP-2-selective docking molecule (36,68) in a manner analogous to the way ARF activates an AP-1-specific docking site at the TGN (5,7,8). PtdIns(4,5)P2 and Clathrin-coat Assembly-In the absence of PtdIns(4,5)P2 production, we find that AP-2 binding to lysosomes is about 3-fold lower. At least two explanations for this result can be considered. First, as the bacterial PLD compromises the integrity of the lysosomal membrane, the reduced coat assembly and the failure of the assembled lattice to invaginate might simply reflect the damage inflicted on the membrane by the enzymatic treatment. Alternatively, the data could suggest that, in addition to serving as a co-factor for PLD1, PtdIns(4,5)P2 might also contribute to the nucleation of coated structures directly. In fact, evidence for multiple roles for PtdIns(4,5)P2 in clathrin-mediated endocytosis has just been published (69). Our data show that the lipids formed on lysosomes are PtdIns(4)P and PtdIns(4,5)P2, neither of which affects the affinity of AP-2 for tyrosine-based sorting signals (70). Therefore, direct modulation of tyrosine-based sorting signals by polyphosphoinositides is unlikely to account for increased binding of AP-2 to lysosomes (70). PtdIns(4,5)P2 could anchor AP-2 to the lysosome surface directly, since a high-affinity inositol polyphosphate-binding site is located within the amino-terminal 50 residues of the alpha subunit of the AP-2 heterotetramer (71). Association of AP-2 with PtdIns(4,5)P2 might orient the molecule for optimal association with sorting signals and together, these two attachments might transiently retain an adaptor on the membrane if release from the putative docking site precedes clathrin recruitment.
FIG. 9. Schematic model illustrating the positive feedback loop between PIP5KI and PLD1 that occurs at the lysosome surface and how the products of this regulatory loop appear to effect clathrin-coat assembly on lysosomes.
Retrograde Traffic from the Lysosome-The physiological significance of clathrin-coat formation on lysosomes is somewhat controversial. While the evidence for an outward pathway of membrane flow is only indirect in mammalian cells, we reasoned that a retrograde route from the lysosome would be important to maintain overall membrane homeostasis and speculated that this would provide a suitable mechanism for recycling regulatory molecules, like v-SNAREs (21). There is now very good evidence for retrograde movement from the Saccharomyces cerevisiae vacuole, the yeast equivalent of the lysosome. Strikingly, a phosphatidylinositol 3-phosphate 5-kinase, Fab1p, is intimately connected with the regulation of the size of the vacuole (72,73).
Within 30 min of shifting to the restrictive temperature, the vacuole of a fab1 temperature conditional mutant more than doubles in size, prompting the conclusion that phosphoinositide synthesis is a critical regulator of the size and integrity of the vacuole, probably by modulating transport into or out of this organelle (72). Direct evidence for protein flux through the vacuole has recently been obtained (74). It is not known yet whether clathrin and adaptors are involved in the return of proteins from the yeast vacuole and, certainly, the AP-2-dependent clathrin-coat assembly we see on lysosomes in our in vitro assays is an exaggerated phenomenon. Nevertheless, the yeast data, and the presence of both PI4K and PLD1 activity on the lysosome membrane suggest, but do not prove, that clathrin-coat assembly on lysosomes occurs in a tightly regulated manner (54). Some of the regulatory constraints appear to be lost in the in vitro systems that we use. It is important to note, however, that massive AP-2containing clathrin-coat assembly does occur on lysosomes under some conditions in vivo (21,75). More importantly, perhaps, our preliminary experiments analyzing clathrin-coat assembly on preparations of highly purified synaptic plasma membrane, and other independent studies (69), reveal that PtdIns(4,5)P 2 is also a critical regulator of AP-2 recruitment and clathrin polymerization at the cell surface. Our current interpretation then is that the biochemistry that underlies coat assembly on the lysosome is very likely to be similar to the process that occurs at the cell surface.
When Do You Need Billions of Words of Pretraining Data?

NLP is currently dominated by language models like RoBERTa which are pretrained on billions of words. But what exact knowledge or skills do Transformer LMs learn from large-scale pretraining that they cannot learn from less data? To explore this question, we adopt five styles of evaluation: classifier probing, information-theoretic probing, unsupervised relative acceptability judgments, unsupervised language model knowledge probing, and fine-tuning on NLU tasks. We then draw learning curves that track the growth of these different measures of model ability with respect to pretraining data volume using the MiniBERTas, a group of RoBERTa models pretrained on 1M, 10M, 100M, and 1B words. We find that these LMs require only about 10M to 100M words to learn to reliably encode most syntactic and semantic features we test. They need a much larger quantity of data in order to acquire enough commonsense knowledge and other skills required to master typical downstream NLU tasks. The results suggest that, while the ability to encode linguistic features is almost certainly necessary for language understanding, it is likely that other, unidentified, forms of knowledge are the major drivers of recent improvements in language understanding among large pretrained models.

Introduction

Pretrained language models (LMs) like BERT and RoBERTa have become ubiquitous in NLP. New models require massive datasets of tens or even hundreds of billions of words (Brown et al., 2020) to improve on existing models on language understanding benchmarks like GLUE (Wang et al., 2018). Much recent work has used probing methods to evaluate what these models do and do not learn (Belinkov and Glass, 2019; Tenney et al., 2019b; Rogers et al., 2020; Ettinger, 2020). Since most of these works only focus on models pretrained on a fixed data volume (usually billions of words), many interesting questions regarding the effect of the amount of pretraining data remain unanswered: What have data-rich models learned that makes them so effective on downstream tasks? How much pretraining data is required for LMs to learn different grammatical features and linguistic phenomena? Which of these skills do we expect to improve when we scale pretraining past 30 billion words? Which aspects of grammar can be learned from data volumes on par with the input to human learners, around 10M to 100M words (Hart and Risley)?

Figure 1: For each method, we compute overall performance for each RoBERTa model tested as the macro average over sub-task performance after normalization. We fit an exponential curve which we scale to have an initial value of 0 and an asymptote at 1. Classifier and MDL probing mainly test models' encoding of linguistic features; BLiMP tests models' understanding of linguistic phenomena; LAMA tests factual knowledge; SuperGLUE is a suite of conventional NLU tasks.

With these questions in mind, we evaluate and probe the MiniBERTas (Warstadt et al., 2020b), a group of RoBERTa models pretrained on 1M, 10M, 100M, and 1B words, and RoBERTa BASE pretrained on about 30B words, using five methods: First, we use standard classifier probing on the edge probing suite of NLP tasks (Tenney et al., 2019b) to measure the quality of the syntactic and semantic features that can be extracted by a downstream classifier with each level of pretraining.
Second, we apply minimum description length (MDL) probing (Voita and Titov, 2020) to the edge probing suite, with the goal of quantifying the accessibility of these features. Third, we test the models' knowledge of various syntactic phenomena using unsupervised acceptability judgments on the BLiMP suite (Warstadt et al., 2020a). Fourth, we probe the models' world knowledge and commonsense knowledge using unsupervised language model knowledge probing with the LAMA suite (Petroni et al., 2019). Finally, we fine-tune the models on five tasks from SuperGLUE to measure their ability to solve conventional NLU tasks. For each evaluation method, we fit an exponential learning curve to the results as a function of the amount of pretraining data, shown in Figure 1. We have two main findings: First, the results of classifier probing, MDL probing, and unsupervised relative acceptability judgement (BLiMP) show that the linguistic knowledge of models pretrained on 100M words and 30B words is similar, as is the description length of linguistic features. Second, RoBERTa requires billions of words of pretraining data to effectively acquire factual knowledge and to make substantial improvements in performance on dowstream NLU tasks. From these results, we conclude that there are skills critical to solving downstream NLU tasks that LMs can only acquire with billions of words of pretraining data. Future work will likely need to look beyond core linguistic knowledge if we are to better understand and advance the abilities of large language models. Methods We probe the MiniBERTas, a set of 12 RoBERTa models pretrained from scratch by Warstadt et al. (2020b) on 1M, 10M, 100M, and 1B words, the publicly available RoBERTa BASE , which is pretrained on about 30B words, 1 and 3 RoBERTa BASE models with randomly initialized parameters. Descriptions of the five evaluation methods appear in the subsequent sections. 2 In each experiment, we test all 16 models on each task involved. To show the overall trend of improvement, we use non-linear least squares to fit an exponential learning curve to the results. 3 We upsample RoBERTa BASE results in regression in order to have an equal number of results for each data quantity. We use a four-parameter exponential learning curve used to capture diminishing improvement in performance as a function of the number of practice trials (Heathcote et al., 2000;Leibowitz et al., 2010): where E(P n ) is the expected performance after n trials, 4 P 0 and P ∞ and are the initial and asymptotic performance, and α and β are coefficients to translate and dilate the curve in the log domain. We plot the results in a figure for each task, where the y-axis is the score and the x-axis is the amount of pretraining data. 5 For some plots, we use min-max normalization to adjust the results into the range of [0, 1], where 0 and 1 are the inferred values of P 0 and P ∞ , respectively. 6 Classifier Probing We use the widely-adopted probing approach of Ettinger et al. (2016), Adi et al. (2017), and otherswhich we call classifier probing-to test the extent to which linguistic features like part-of-speech and coreference are encoded in the frozen model representations. We adopt the ten probing tasks in the 1 The miniBERTas' training data is randomly sampled from Wikipedia and Smashwords in a ratio of 3:1. These two datasets are what Devlin et al. (2019) use to pretrain BERT and represent a subset of the data used to pretrain RoBERTa. 
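To make the curve-fitting step described above concrete, the following is a minimal sketch built on SciPy's curve_fit. It is illustrative only: the exact four-parameter functional form is not reproduced in the text here, so the parameterization below (in which alpha translates and beta dilates the curve along the log-n axis) is an assumption, and the pretraining sizes and scores are toy values rather than the paper's results.

# Minimal sketch of fitting a four-parameter exponential learning curve and
# min-max normalizing scores against the fitted initial/asymptotic performance.
# The functional form, initial guesses, and data below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, p0, p_inf, alpha, beta):
    # Performance starts near p0 and saturates at p_inf as pretraining size n grows;
    # alpha shifts and beta stretches the transition on a log-n axis (one plausible form).
    x = np.log(n)
    return p_inf + (p0 - p_inf) * np.exp(-np.exp((x - alpha) / beta))

# Toy data: pretraining sizes in words (1 stands in for the random baseline) and
# macro-averaged scores for the corresponding models.
sizes = np.array([1.0, 1e6, 1e7, 1e8, 1e9, 3e10])
scores = np.array([0.20, 0.55, 0.70, 0.78, 0.81, 0.84])

params, _ = curve_fit(learning_curve, sizes, scores, p0=[0.2, 0.85, 15.0, 3.0], maxfev=10000)
p0_hat, p_inf_hat, alpha_hat, beta_hat = params

def normalize(score):
    # Map the fitted P0 to 0 and the fitted asymptote P_inf to 1.
    return (score - p0_hat) / (p_inf_hat - p0_hat)

print(params, normalize(0.78))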
RoBERTaBASE's training data also includes of news and web data in addition to Wikipedia and Smashwords. Warstadt et al. ran pretraining 25 times with varying hyperparameter values and model sizes for the 1M-, 10M-, and 100M-word settings, and 10 times for the 1B-word setting. All the models were pretrained with early stopping on validation set perplexity. For each dataset size, they released the three models with the lowest validation set perplexity, yielding 12 models in total. 2 Code: https://github.com/nyu-mll/ pretraining-learning-curves 3 We use SciPy's curve fit implementation. 4 In our case, a trial is one word of pretraining. 5 We plot the no-pretraining random baseline with an xvalue of 1. 6 The unnormalized results are included in the appendix. In each subplot we also plot the overall edge-probing performance, which we calculate for each MiniBERTa as its average F1 score on the 10 edgeprobing tasks (after normalization). For context, we also plot BERT LARGE performance for each task as reported by Tenney et al. (2019a). edge probing suite (Tenney et al., 2019b). 7 Classifier probing has recently come under scrutiny. Hewitt and Liang (2019) and Voita and Titov (2020) caution that the results depend on the complexity of the probe, and so do not precisely reveal the quality of the representations. However, 7 Task data sources: Part-of-Speech, Constituents, Entities, SRL, and OntoNotes coref. from Weischedel et al. (2013) we see two advantages to this method: First, the downstream classifier setting and F1 evaluation metric make these experiments easier to interpret in the context of earlier results than results from relatively novel probing metrics like minimum description length. Second, we focus on relative differences between models rather than absolute performance, and include a randomly initialized baseline model in the comparison. When the model representations are random, the probe's performance reflects the probe's own ability to solve the target task. Therefore, any improvements over this baseline value are due to the representation rather than the probe itself. Task formulation and training Following Tenney et al., we use attention pooling to generate representation(s) of the token span(s) involved in the task and train an MLP that predicts whether a given label correctly describes the input span(s). We adopt the "mix" representation approach described in the paper. To train the probes, we use the same hyperparameters used in Tenney et al. and tune the batch size and learning rate. 8 Results We plot results in Figure 2. From the single-task curves we conclude that most of the feature learning occurs with <100M words of pretraining data. Based on the best-fit curve, we can estimate that 90% of the attainable improvements in overall performance are achieved with <20M words. Most plots show broadly similar learning curves, which rise sharply with less than 1M words of pretraining data, reach the point of fastest growth (in the log domain) around 1M words, and are nearly saturated with 100M words. The most notable exception to this pattern is the Winograd task, which only rises significantly between 1B and 30B words of pretraining data. 
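The classifier probes described above, attention pooling over the frozen model's span representations followed by an MLP, can be sketched roughly as below. This is not the exact edge-probing implementation: the layer-mixing ("mix") weights are omitted, and the hidden sizes, label counts, and tensor shapes are illustrative assumptions.

# Minimal sketch of an edge-probing style classifier probe:
# self-attention pooling over the tokens in each span, then an MLP over the
# concatenated span vectors, trained with binary cross-entropy per label.
import torch
import torch.nn as nn

class SpanPool(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.score = nn.Linear(hidden, 1)  # learned attention score per token

    def forward(self, token_reps):  # token_reps: (span_len, hidden), frozen features
        weights = torch.softmax(self.score(token_reps), dim=0)
        return (weights * token_reps).sum(dim=0)  # (hidden,)

class EdgeProbe(nn.Module):
    def __init__(self, hidden, n_labels, two_spans=True):
        super().__init__()
        self.pool1 = SpanPool(hidden)
        self.pool2 = SpanPool(hidden) if two_spans else None
        in_dim = hidden * (2 if two_spans else 1)
        self.mlp = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, n_labels))

    def forward(self, span1_reps, span2_reps=None):
        pooled = self.pool1(span1_reps)
        if self.pool2 is not None:
            pooled = torch.cat([pooled, self.pool2(span2_reps)], dim=-1)
        return self.mlp(pooled)  # per-label logits

# Hypothetical usage, with reps a (seq_len, 768) tensor of frozen RoBERTa features:
# logits = EdgeProbe(768, n_labels=49)(reps[3:5], reps[7:8])
# loss = nn.BCEWithLogitsLoss()(logits, gold_label_vector)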
9 As the Winograd task is designed to test commonsense knowledge and reasoning, the results suggest that these features require more data to encode than syntactic and semantic ones, with the caveat that the dataset is smaller than the other edge probing tasks, and results on Winograd tasks are highly sensitive to factors such as task formulation (Liu et al., 2020). We observe some general differences between different types of tasks. Figure 3 shows the aggregated learning curves of syntactic, semantic, and commonsense tasks. The syntactic learning curve rises slightly earlier than the semantic one and 90% of the improvements in syntactic learning can be made with about 10M words, while the semantic curve still rises slightly after 100M. This is not surprising, as semantic computation is generally thought to depend on syntactic representa- Minimum Description Length Probing In this experiment, we study the MiniBERTas with MDL probing (Voita and Titov, 2020), with the goal of revealing not only the total amount of feature information extracted by the probe, but also the effort taken by the probe to extract the features. MDL measures the minimum number of bits needed to transmit the labels for a given task given that both the sender and the receiver have access to the pretrained model's encoding of the data. A well-trained decoder model can help extract labels from the representations and thus reduce the number of bits needed to transmit the labels. Since the model itself will also need to be transmitted, the total description length is a sum of two terms: The data codelength is the number of bits needed to transmit the labels assuming the receiver has the trained decoder model, i.e. the cross-entropy loss of the decoder. The model codelength is the number of bits needed to transmit the decoder parameters. We follow Voita and Titov's online code estimation of MDL, where the decoder is implicitly transmitted. As in Section 3, we train decoders using the same hyperparameter settings and task definitions as Tenney et al. (2019b). 10 Results We plot the online code results in Figure 4. The overall codelength shows a similar trend to edge probing: Most of the reduction in feature codelength is achieved with fewer than 100M words. MDL for syntactic features decreases even sooner. Results for Winograd are idiosyncratic, probably due to the failure of the probes to learn the task. The changes in model codelength and data codelength are shown on the bar plots in Figure 4. We compute the data codelength following Voita and Titov (2020) using the training set loss of a classifier trained on the entire training set, and the model codelength is the total codelength minus the data codelength. The monotonically decreasing data codelength simply reflects the fact that the more data rich RoBERTa models have smaller loss. When it comes to the model codelength, however, we generally observe the global minimum for the randomly initialized models (i.e., at "None"). This is expected, and intuitively reflects the fact that a decoder trained on random representations would provide little information about the labels, and so it would be optimal to transmit a very simple decoder. On many tasks, the model codelength starts to decrease when the pretraining data volume exceeds a certain amount. However, this trend is not consistent across tasks and the effect is relatively small. 
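The online-code estimate of MDL discussed above can be summarized in a short sketch. The helpers train_probe and cross_entropy_bits are hypothetical stand-ins for training a decoder on the frozen representations of a data prefix and measuring its loss in bits on the next block; the block schedule and label handling are simplified relative to Voita and Titov's setup.

# Minimal sketch of online (prequential) codelength estimation.
# blocks: list of (features, labels) chunks; the first block is transmitted
# with a uniform code over the label set, every later block is encoded with a
# probe trained on all previously transmitted blocks.
import math

def online_codelength(blocks, n_labels, train_probe, cross_entropy_bits):
    total_bits = len(blocks[0][1]) * math.log2(n_labels)  # uniform code for block 1
    seen = [blocks[0]]
    for next_block in blocks[1:]:
        probe = train_probe(seen)                      # decoder fit on data sent so far
        total_bits += cross_entropy_bits(probe, next_block)
        seen.append(next_block)
    return total_bits

# The loss-based term plays the role of the data codelength; subtracting it from
# the total gives the implicit model codelength, mirroring the decomposition above.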
Unsupervised Grammaticality Judgement: We use the BLiMP benchmark (Warstadt et al., 2020a) to test models' knowledge of individual grammatical phenomena in English. BLiMP is a challenge set of 67 tasks, each containing 1000 minimal pairs of sentences that highlight a particular morphological, syntactic, or semantic phenomenon. Minimal pairs in BLiMP consist of two sentences that differ only by a single edit, but contrast in grammatical acceptability. A language model classifies a minimal pair correctly if it assigns a higher probability to the acceptable sentence. Since RoBERTa is a masked language model (MLM), we measure pseudo log-likelihood (Wang and Cho, 2019) to score sentences (Salazar et al., 2020).

Results: We plot learning curves for BLiMP in Figure 5. Warstadt et al. organize the 67 tasks in BLiMP into 12 categories based on the phenomena tested, and for each category we plot the average accuracy for the tasks in the category. We do not normalize results in this plot. For the no-data baseline, we plot chance accuracy of 50% rather than making empirical measurements from random RoBERTa models. We find the greatest improvement in overall BLiMP performance between 1M and 100M words of pretraining data. With 100M words, sensitivity to contrasts in acceptability overall is within 9 accuracy points of humans, and improves only 6 points with additional data. This shows that substantial knowledge of many grammatical phenomena can be acquired from 100M words of raw text. We also observe significant variation in how much data is needed to learn different phenomena. We see the steepest learning curves on agreement phenomena, with nearly all improvements occurring between 1M and 10M words. For phenomena involving wh-dependencies, i.e. filler-gap dependencies and island effects, we observe shallow and delayed learning curves with 90% of possible improvements occurring between 1M and 100M words. The relative difficulty of wh-dependencies can probably be ascribed to the long-distance nature and lower frequency of those phenomena. We also observe that the phenomena tested in the quantifiers category are never effectively learned, even by RoBERTa BASE. These phenomena include subtle semantic contrasts (for example, Nobody ate {more than, *at least} two cookies) which may involve difficult-to-learn pragmatic knowledge (Cohen and Krifka, 2014).

Factual Knowledge: We also probe the models' factual and commonsense knowledge with the LAMA tasks, which include Google-RE, T-REx, ConceptNet (Speer and Havasi, 2012), and SQUAD (Rajpurkar et al., 2016). The Google-RE and T-REx tasks are each divided into three sub-tasks.

Results: We plot the results on LAMA in Figure 6. Far more data may be needed for the model to be exposed to relevant factual knowledge. The learning curves for many LAMA tasks do not show clear signs of saturation in the range of 0 to 30B words, suggesting further improvements are likely with much larger data quantities. Among LAMA tasks, ConceptNet most directly tests commonsense knowledge. The steep slope of the ConceptNet curve between 100M and 30B words of pretraining data and the large precision jump (> 0.05) from 1B to 30B show that increasing the pretraining data to over 1B words significantly improves the LM's commonsense knowledge, which explains the shape of the Winograd coref. learning curve in Section 3.
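As an aside on the BLiMP evaluation above, the sketch below shows one way pseudo log-likelihood scoring of a minimal pair can be implemented with a masked LM. It uses the public roberta-base checkpoint as a stand-in (the MiniBERTa checkpoints are not assumed here) and is a simplified illustration rather than the evaluation code behind the reported results.

```python
# A minimal sketch of pseudo log-likelihood scoring for a BLiMP-style minimal pair:
# mask one token at a time and sum the log-probability of the true token.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip the special tokens at the edges
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

good = "Nobody ate more than two cookies."
bad = "Nobody ate at least two cookies."
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))  # True if scored correctly
```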
Fine-tuning on NLU Tasks: SuperGLUE is a benchmark suite of eight classification-based language-understanding tasks. We test each MiniBERTa on five SuperGLUE tasks on which we expect to see significant variation at these scales. The hyperparameter search range used for each task is described in the appendix.

Results: We plot the results on the selected SuperGLUE tasks in Figure 7. Improvements in SuperGLUE performance require a relatively large volume of pretraining data. For most tasks, the point of fastest improvement in our interpolated curve occurs with more than 1B words. None of the tasks (with the possible exception of CommitmentBank) show any significant sign of saturation at 30B words. This suggests that some key NLU skills are not learnt with fewer than billions of words, and that models are likely to continue improving substantially on these tasks given 10 to 100 times more pretraining data.

Figure 1 plots the overall learning curves for these five methods together. The most striking result is that good NLU task performance requires far more data than achieving good representations for linguistic features. Classifier probing, MDL probing, and acceptability judgment performance all improve rapidly between 1M and 10M words and show little improvement beyond 100M words, while performance on the NLU tasks in SuperGLUE appears to improve most rapidly with over 1B words and will likely continue improving at larger data scales. While the linguistic features we test are undoubtedly needed to robustly solve most NLU tasks, a model that can extract and encode a large proportion of these features may still perform poorly on SuperGLUE. What drives improvements in NLU task performance at larger data scales remains an open question.

Discussion: Factual knowledge may play a large role in explaining SuperGLUE performance. This hypothesis is backed up by results from the Winograd edge-probing task (Figure 2) and the LAMA tasks (Figure 6), which suggest that most of the improvements in the model's world and commonsense knowledge are made with over 100M words. However, while the LAMA learning curve shows signs of slowing between 1B and 30B words, the SuperGLUE curve does not. Another possible explanation is that linguistic features encoded by a model may not be easily accessible during fine-tuning. Warstadt et al. (2020b) found that RoBERTa can learn to reliably extract many linguistic features with little pretraining data, but requires billions of words of pretraining data before it uses those features preferentially when generalizing. In light of Warstadt et al.'s findings, we had initially hypothesized that feature accessibility as measured by MDL might show a shallower or later learning curve than standard classifier probing. Our findings do not support this hypothesis: Figure 1 shows no substantial difference between the classifier probing and MDL probing curves. However, we do not totally rule out the possibility that linguistic feature accessibility continues to improve with massive pretraining sets. There are potential modifications to Voita and Titov's approach that could more faithfully estimate feature accessibility. First, although RoBERTa is actually fine-tuned in most applications, we and Voita and Titov measure MDL taking the outputs of the frozen RoBERTa model as input to a trainable MLP decoder. It may be more relevant to measure MDL by fine-tuning the entire model (Lovering et al., 2021). Second, MDL actually estimates the information content of a particular dataset, rather than the feature itself. Whitney et al. (2020) propose an alternative to MDL that measures feature complexity in a way that does not depend on the size of the dataset.
Related Work: Probing neural network representations has been an active area of research in recent years (Belinkov and Glass, 2019; Rogers et al., 2020). With the advent of large pretrained Transformers like BERT (Devlin et al., 2019), numerous papers have used classifier probing methods to attempt to locate linguistic features in learned representations with striking positive results (Tenney et al., 2019b; Hewitt and Manning, 2019). However, another thread has found problems with many probing methods: Classifier probes can learn too much from training data (Hewitt and Liang, 2019) and can fail to distinguish features that are extractable from features that are actually used when generalizing on downstream tasks (Voita and Titov, 2020; Pimentel et al., 2020; Elazar et al., 2020). Moreover, different probing methods often yield contradictory results (Warstadt et al., 2019).

There have also been a few earlier studies investigating the relationship between pretraining data volume and linguistic knowledge in language models. Studies of unsupervised acceptability judgments find fairly consistent evidence of rapid improvements in linguistic knowledge up to about 10M words of pretraining data, after which improvements slow down for most phenomena. van Schijndel et al. (2019) find large improvements in knowledge of subject-verb agreement and reflexive binding up to 10M words, and little improvement between 10M and 80M words. Hu et al. (2020) find that GPT-2 trained on 42M words performs roughly as well on a syntax benchmark as a similar model trained on 100 times that amount. Relatedly, Warstadt et al. (2020b) measure RoBERTa's preference for linguistic features over surface features during fine-tuning on ambiguous classification tasks. Other studies have investigated how one model's linguistic knowledge changes during the training process, as a function of the number of updates (Saphra and Lopez, 2019; Chiang et al., 2020).

Raffel et al. (2020) also investigate how performance on SuperGLUE (and other downstream tasks) improves with pretraining dataset size between about 8M and 34B tokens. In contrast to our findings, they find that models with around 500M tokens of pretraining data can perform similarly on downstream tasks to models with 34B words. However, there are many differences in our settings that may lead to this divergence. For example, they pretrain for a fixed number of iterations (totaling 34B token updates), whereas the MiniBERTas we use were pretrained with early stopping. They also use prefix prompts in their task formulations, and adopt an encoder-decoder architecture, and thus their model has roughly twice the number of parameters of the largest model we evaluate.

There is also some recent work that investigates the effect of pretraining data size in other languages. Micheli et al. (2020) pretrain BERT-based language models on 10MB, 100MB, 500MB, 1GB, 2GB, and 4GB of French text and test them on a question answering task. They find that the French MLM pretrained on 100MB of raw text has similar performance to the ones pretrained on larger datasets on the task, and that corpus-specific self-supervised learning does not make a significant difference. Martin et al. (2020) also show that French MLMs can already learn a lot from small-scale pretraining. Concurrent work (Liu et al., 2021) probes RoBERTa models pretrained for different numbers of iterations using a set of probing tasks similar to ours.
They find that linguistic abilities are acquired fastest, world and commonsense knowledge learning takes more iterations, and reasoning abilities are never stably acquired. Both studies show that linguistic knowledge is easier to learn than factual knowledge. Conclusion We track several aspects of RoBERTa's ability as pretraining data increases. We find that ability in syntax and semantics largely saturates after only 10M to 100M words of pretraining data-on par with the data available to human learners-while learning factual knowledge requires much more data. We also find that scaling pretraining data size past billions of words significantly improves the NLU performance, though we cannot fully explain what abilities drive this improvement. Answering this question could be a stepping stone to more data-efficient models. Acknowledgments This material is based upon work supported by the National Science Foundation under grant no. 1850208. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We would like to thank Udit Arora, Jason Phang, Clara Vania, and ML 2 for feedback on an earlier draft. Thanks also to Kyunghyun Cho, Tal Linzen, Grusha Prasad, and Emin Orhan for suggestions regarding the exponential learning curve, and to Elena Voita, Ian Tenney, and Haokun Liu for the discussion about the implementation of the probing methods. Ethical Considerations There are several ethical reasons to study LMs with limited pretraining data. Training massive LMs like RoBERTa from scratch comes with non-trivial environmental costs (Strubell et al., 2019), and they are expensive to train, limiting contributions to pretraining research from scientists in lower-resource contexts. By evaluating LMs with limited pretraining, we demonstrate that smaller LMs match massive ones in performance in many respects. We also identify a clear gap in our knowledge regarding why extensive pretraining is effective. Answering this question could lead to more efficient pretraining and ultimately reduce environmental costs and make NLP more accessible. On the other hand, there is a danger that our work, by projecting substantial gains in model performance by increasing pretraining size, could legitimize and encourage the trend of ever growing datasets. Massive LMs also replicate social biases present in training data (Nangia et al., 2020). By establish-ing benchmarks for smaller LMs and highlighting their efficacy for certain purposes, we hope to spur future work that takes advantage of smaller pretraining datasets to carefully curate the data distribution, as advocated by Bender et al. (2021), in order to build LMs that do less to reproduce harmful biases and are more inclusive of minority dialects.
BKCa Mediates Dysfunction in High Glucose Induced Mesangial Cell Injury via TGF-β1/Smad2/3 Signaling Pathways Objective To explore the role and mechanism of BKCa in diabetic kidney disease. Methods Rat mesangial cells (MCs) HBZY-1 were cultured with high glucose to simulate the high-glucose environment of diabetic kidney disease in vivo. The effects of large conductance calcium-activated potassium channel (BKCa) on proliferation, migration, and apoptosis of HBZY-1 cells were observed. The contents of transforming growth factor beta 1 (TGF-β1), Smad2/3, collagen IV (Col IV), and fibronectin (FN) in the extracellular matrix were also observed. Results High glucose significantly damaged HBZY-1 cells, which enhanced the ability of cell proliferation, migration, and apoptosis, and increased the secretion of Col IV and FN. Inhibition of BKCa and TGF-β1/Smad2/3 signaling pathways can inhibit the proliferation, migration, and apoptosis of HBZY-1 cells and suppress the secretion of Col IV and FN. The effect of excitation is the opposite. Conclusions BKCa regulates mesangial cell proliferation, migration, apoptosis, and secretion of Col IV and FN and is associated with TGF-β1/Smad2/3 signaling pathway. Introduction e diagnosis of type 2 diabetes mellitus (T2DM), the majority of diabetes mellitus (DM), is often accompanied by chronic microvascular or macrovascular complications with high economic and social costs, among which are diabetic kidney disease (DKD), retinopathy, peripheral blood vessels, and coronary atherosclerosis. About 30% of DM patients suffer from DKD, the leading cause of endstage kidney disease (ESKD) and even premature death [1]. Glomerulus is currently regarded as the main site of lesions of DKD with the main pathological features being diffuse mesangial matrix dilatation, exudative lesions and segmental nodular sclerosis in the glomeruli [2], accumulation of the extracellular matrix (ECM) in the glomerulus and tubulointerstitial septal compartment [3], and thickening and transparency of the intrarenal vascular system [4]. Glomerular mesangial cells (MCs) are involved in many physiological activities, such as the production of growth factors, the formation of glomerular mesangial matrix as the structural support of capillaries, and the regulation of glomerular hemodynamics by contractile properties [5]. When the glomerulus is damaged, MCs often change their phenotype to myofibroblasts expressing α-smooth muscle actin or interstitial collagen besides normal matrix components, which is a key link of glomerulosclerosis [6]. Despite many strategies proven effective, including control of blood sugar and blood pressure and inhibition of the renin-angiotensin-aldosterone system (RAAS), the number of DM patients who eventually develop DKD is still large [7]. erefore, it is still vital to find new therapeutic targets for preventing and delaying the progress of DKD. BK Ca is considered to be a key participant in many physiological functions, including regulating neuronal discharge [8], smoothing muscle tension [9], promoting endocrine cell secretion [10], cell proliferation and migration [11], etc. It also participates in a series of diseases, such as hypertension, epilepsy, cancer, and so on [12]. Recently, BK channel subtypes were also found in glomerular podocytes and mesangial cells [6,13]. Our previous studies found that BK Ca channel expression was downregulated and the current density of the BK Ca channel was downregulated in diabetic coronary artery smooth muscle cells [14]. 
TGF-β1 is a key regulator of ECM synthesis and degradation in diabetic nephropathy [15], which promotes renal fibrosis by upregulating the genes encoding ECM proteins and enhancing the production of ECM degrading enzyme inhibitors to inhibit its degradation [16]. TGF-β1 first binds to the membrane TGF-βII receptor, and the complexes are transphosphorylated to activate the type I receptor; it then activates the Smad signaling pathway and regulates the transcription of TGF-β1 target genes, such as Col IV and FN, through phosphorylation of Smad2/3 and translocation into the nucleus [17,18]. At present, no research has focused on the relationship between BK Ca and MCs. However, BK Ca participates in cell proliferation and migration and can regulate the secretion of endocrine cells; MCs can secrete mesangial matrix, transform the phenotype into fibroblasts under stress, and participate in glomerulosclerosis. Due to the similar function between the two, it is worth further exploring their internal relationship. In this study, HBZY-1 cells were cultured with high glucose to establish the DKD cell model. The effects of inhibiting or activating BK Ca on the proliferation, migration, and apoptosis of HBZY-1 cells, as well as the changes of TGF-β1, Smad2/3, and ECM (Col IV, FN) were observed to explore the role of BK Ca in DKD and its mechanism, so as to provide new ideas for the clinical treatment and drug development of DKD.

Mesangial Cell Proliferation. MTT was used to measure the cell survival rate to calculate the proliferation. The cells were counted and the density was adjusted to 5 × 10 4 and then inoculated into 96-well plates at 37°C in a 5% CO 2 incubator for 24 hours. Then the corresponding intervention drugs were added according to the experimental design; 5 ml of MTT solution was added to each well, and the cells were cultured for 4 hours. The supernatant was absorbed and discarded, and 150 μL of DMSO was added to each well. The plates were shaken for 10 minutes. After the crystals were completely dissolved, the OD value (absorbance) of each well at 490 nm wavelength was detected using an enzyme labeling instrument. The cell survival rate was calculated by the following equation: (OD administration − OD blank)/(OD control − OD blank) * 100%.

HBZY-1 Cell Migration. Three horizontal lines were first drawn evenly on the back of each well of the 6-well plate, and then HBZY-1 cells were cultured at a density of 12 × 10 4 at 37°C in a 5% CO 2 incubator for 24 hours. Then, 3 scratches were made evenly in the cells in each well, perpendicular to the previous horizontal lines, with the tip of the pipette, and the cells were continually cultured for the grouping intervention; photographs were taken at 0 and 48 hours, and the scratch area was calculated by ImageJ software. Use ImageJ software to open the picture to be analyzed. Under the Process project, click Enhance Contrast, select Normalize, adjust the Saturated pixels to 0.3%, and then select Smooth and Find Edges under the Process project. After that, select Adjust-Threshold under the Image item, select Red, and adjust the threshold to 0−20. Finally, use the magic wand tool to select the black scratches and select Measure under the Analyze project to get the scratch area. Wound healing percentage = initial scratch area/scratch area at a point in time.
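As a small illustration of the two formulas just given, the sketch below computes the MTT survival rate and the wound-healing percentage exactly as defined above; the OD readings and scratch areas are made-up numbers, not data from this study.

```python
# A small sketch of the survival-rate and wound-healing arithmetic described above.
# The OD readings and scratch areas are illustrative, not measured values.

def survival_rate(od_treated, od_control, od_blank):
    # (OD administration - OD blank) / (OD control - OD blank) * 100%
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

def wound_healing_percentage(initial_area, area_at_time):
    # As defined in the text: initial scratch area / scratch area at a point in time
    return initial_area / area_at_time

print(survival_rate(od_treated=0.62, od_control=0.85, od_blank=0.08))  # ~70.1%
print(wound_healing_percentage(initial_area=1.00, area_at_time=0.55))  # ~1.82
```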
Hoechst Dyeing Experiment. HBZY-1 cells were cultured in 6-well plates at a density of 8 × 10 4 at 37°C in a 5% CO 2 incubator for 24 hours. The cells were intervened by adding drugs in groups; after 48 hours, the supernatant was discarded, and 4% polyformaldehyde fixing solution and Hoechst 33258 staining solution were added to each well, respectively. After discarding the staining solution, the cell staining was observed under an inverted fluorescence microscope and then photographed, recorded, and analyzed.

Annexin V-FITC/PI Double-Staining Experiment. HBZY-1 cells were planted into 6-well plates at a density of 12 × 10 4. The cells were cultured at 37°C in a 5% CO 2 incubator for 24 hours, then treated in groups, and cultured for 48 hours before being collected. A small amount of 1 × binding buffer was added to adjust the cell density to 10 × 10 5. After labeling, 100 ml of cell suspension was added to each tube, 5 ml of Annexin V-FITC and 5 ml of PI staining solution were added to each tube, and the samples were then detected by a flow cytometer. The results were analyzed by FlowJo software.

Expression of BK Ca -α, β, Col IV, and FN Protein in HBZY-1 Cells. Proteins extracted from the HBZY-1 cells were analyzed by Western blotting. Equal amounts of protein (about 50 μg) were subjected to SDS-PAGE and transferred to a PVDF membrane, then blocked in 5% skimmed milk for 2 hours at room temperature, and then incubated overnight at 4°C with the following primary antibodies according to the instructions: anti-BK Ca -α, anti-BK Ca -β, anti-Col IV, anti-FN, and anti-β-actin (Abcam Biotechnology, USA). The membranes were then incubated with HRP-conjugated secondary anti-mouse antibody (Abcam Biotechnology, USA). Protein bands on the membrane were visualized by ECL (electrochemiluminescence) (Thermo Scientific, Rockford, IL, USA) and quantitated using Quantity One software (Bio-Rad, Richmond, CA, USA). ImageJ software was used to open the band pictures and obtain the gray-value statistics of the selected areas.

Detection of TGF-β1 and Smad2/3 in Cell Supernatant by ELISA. HBZY-1 cells were inoculated into 6-well plates at a density of 12 × 10 4 at 37°C in a 5% CO 2 incubator for 24 hours and then cultured for 48 hours after intervention by grouping administration. The cell culture medium was collected and centrifuged at 4°C and 8000 rpm for 15 minutes. Only the supernatant was taken and processed according to the ELISA guidelines.

Statistical Analysis. The experimental data were analyzed with SPSS 19.0 software (IBM, Armonk, NY, USA). The data were expressed as means ± standard deviations. T-test analysis was used for comparisons between two groups. One-way ANOVA was used for comparisons among multiple groups. P < 0.05 indicated that the difference was statistically significant.

The Results of BK Ca -siRNA Transfection and NS11021 and Tet Pre-Experimental Concentration Selection. BK Ca -siRNA was successfully transfected (see Supplementary Figure 1), and finally BK Ca -α-1188 was selected to continue the subsequent experiments for the gene inhibition group (Supplementary Figure 2). In order to select the optimum concentrations of NS11021 and Tet, a series of experiments was conducted, which confirmed the concentrations of NS11021 and Tet as 10 μM in this study. The detailed results are given in Supplementary Figure 3.

Inhibition of BK Ca Can Reduce the Apoptosis of Mesangial Cells. In the Hoechst staining experiment, the fluorescence intensity of the NG group, Tet group, BK Ca -siRNA group, SB431542 group, and NS11021 + SB431542 group was lower, most of the cell nuclei and cytoplasm were light blue, and the fluorescence expression of chromatin was uniform.
The fluorescence intensity of the NG group was the lowest, and the number of apoptotic cells was the lowest (Figures 3(a), 3(d), 3(e), 3(g), and 3(h)). The fluorescence intensities of the HG group, NS11021 group, TGF-β1 group, and Tet + TGF-β1 group were high, some cell nuclei were condensed and fragmented, and the whole staining fluorescence showed blue-white dense staining (Figures 3(b), 3(c), 3(f), and 3(i)). These results suggest that inhibiting BK Ca can reduce the apoptosis of glomerular mesangial cells.

Flow cytometry was used to further detect the apoptotic status of each intervention group. Compared with the NG group, the apoptotic rate of the HG group increased from 7.6% to 22.1% after 48 hours of intervention (P < 0.01), indicating that the apoptotic rate increased (Figures 4(a), 4(b), and 4(j)). Compared with the HG group, the apoptotic rate (Figures 4(g) and 4(j)). The apoptotic rate of NS11021 + SB431542 cells decreased (15.7%, P < 0.01) (Figures 4(h) and 4(j)). The apoptotic rate of Tet + TGF-β1 cells increased (27.7%, P < 0.01) (Figure 4(i)).

Inhibition of BK Ca Can Inhibit the Expression of Col IV and FN Protein in Glomerular Mesangial Cells. Compared with the NG group, the expression of BK Ca -α and β protein in the HG group increased 48 hours after intervention (P < 0.01). Compared with the HG group, the difference in the expression of BK Ca -α in NS11021 cells was insignificant (P < 0.05), while the expression of BK Ca -β in NS11021 cells increased (P < 0.01). Tet decreased the expression of BK Ca -α and β in NS11021 cells (P < 0.05), and BK Ca -siRNA decreased the expression of BK Ca -α and β in NS11021 cells (P < 0.01) (Figures 5(a)-5(c)). Compared with SB431542, NS11021 + SB431542, and Tet + TGF-β1, the expression of BK Ca -α and β protein in cells was significantly different (P < 0.01) (Figures 5(d)-5(f)). Compared with the NG group, the expression of Col IV and FN in the HG group increased 48 hours after intervention (P < 0.01). Compared with the HG group, there was no significant difference in the expression of Col IV and FN in NS11021 cells (P < 0.05), Tet decreased the expression of Col IV and FN in NS11021 cells (P < 0.01), and BK Ca -siRNA decreased the expression of Col IV and FN in NS11021 cells (P < 0.01) (Figures 6(a)-6(c)). Compared with SB431542, NS11021 + SB431542, and Tet + TGF-β1, the expression of Col IV and FN in cells was significantly different (P < 0.01) (Figures 6(d)-6(f)).

Inhibition of BK Ca Can Inhibit the Expression of TGF-β1 and Smad2/3 in Supernatant of Mesangial Cells. Compared with the NG group, the expression of TGF-β1 and Smad2/3 increased in the HG group (P < 0.01). Compared with the HG group, the expression of Smad2/3 increased in the NS11021 group (P < 0.05), but the expression of TGF-β1 did not change significantly (P < 0.05).

Figure 5: The effects of each group on the expression of BK Ca -α and β protein were detected by Western blotting. *P < 0.05, **P < 0.01 vs. NG group; #P < 0.05, ##P < 0.01 vs. HG group; △△P < 0.01. n = 6.

Discussion. DKD is a common complication of DM, which is one of the three diseases with the highest incidence at present. Current research shows that the core of DKD is the glomerulus [5]. MCs are the most active intrinsic cells in the glomerulus. They can be regulated by some cytokines to grow abnormally and secrete extracellular matrix when the glomerulus is damaged. They are important pathogenic factors leading to glomerulosclerosis [22].
As the most active intrinsic cell in the glomerulus, MCs can hypertrophy and proliferate in the early stage of DKD and further induce multidirectional signal pathways that affect renal function. The glomerular mesangial dilatation not only induces the secretion and deposition of macromolecular substances such as ECM but also squeezes the glomerular capillaries to make them narrow or even blocked and promotes the release of inflammatory mediators such as DKD-related cytokines and growth factors [5,[18][19][20][21][22][23]. Glomerular lesions of DKD include proliferation and hypertrophy of MCs, excessive accumulation of ECM in the form of an abnormally increased mesangial matrix, and thickening of the glomerular basement membrane, which eventually leads to nodular glomerulosclerosis [24,25]. Thus, abnormal proliferation, migration, and apoptosis of MCs and abnormal accumulation of ECM play a key role in the process of glomerular lesions. The results of the cell proliferation, scratch, and apoptosis experiments in this study showed that NS11021 could significantly promote the proliferation, migration, and apoptosis of HBZY-1 cells compared with the HG group. On the contrary, Tet and BK Ca -siRNA could significantly inhibit the proliferation, migration, and apoptosis of HBZY-1 cells. These results suggest that the proliferation, migration, and apoptosis of HBZY-1 cells can be suppressed by inhibiting BK Ca, thus improving glomerular lesions.

Figure 6: The effects of each group on the expression of collagen IV and fibronectin were detected by Western blotting. *P < 0.05, **P < 0.01 vs. NG group; #P < 0.05, ##P < 0.01 vs. HG group; △△P < 0.01. n = 6.

It should be noted that the cell proliferation and apoptotic rate in the HG group increased at the same time, which is different from that in most experiments. It suggests that high glucose has a two-way effect of nutrition and damage on cells and can promote the proliferation and apoptosis of HBZY-1 cells at the same time. TGF-β is one of the central factors in the occurrence and development of DKD. It can induce the proliferation of mesangial cells and the secretion of collagen and even directly induce the synthesis of ECM components such as Col and FN at the transcriptional level to promote the process of renal fibrosis [26]. At present, three subtypes, TGF-β1, 2, and 3, have been identified in mammals [27], among which TGF-β1 is most closely related to the kidney [28]. Smad2 and 3, as the main downstream effector proteins of the TGF-β signaling pathway, can bind to Smad4 after phosphorylation and then translocate into the nucleus to regulate gene transcription. Among them, Smad3 plays a more important role and is recognized as a fibrogenic factor [29].

Conclusion. TGF-β1 has similar effects to NS11021, which can promote the proliferation, migration, and apoptosis of HBZY-1 cells and promote the secretion of Col IV and FN in the ECM. And SB431542 is similar to Tet and BK Ca -siRNA, which suggests that BK Ca may be related to the TGF-β1/Smad2/3 pathway. Further experiments showed that NS11021 + SB431542 could inhibit the proliferation, migration, and apoptosis of HBZY-1 cells and the secretion of Col IV and FN in the ECM. On the contrary, Tet + TGF-β1 could promote the proliferation, migration, and apoptosis of HBZY-1 cells and the secretion of Col IV and FN in the ECM, suggesting that BK Ca and TGF-β1/Smad2/3 regulate the proliferation, migration, and apoptosis of HBZY-1 cells and the secretion of Col IV and FN through the same signaling pathway. TGF-β1/Smad2/3 is downstream of BK Ca.
Through the development of specific agonists or blockers of BK Ca to regulate the expression and function of BK Ca, a new therapeutic direction for DKD may be provided.

Data Availability. The datasets used and/or analyzed in the study are available from the corresponding author on reasonable request.

Conflicts of Interest. The authors declare no conflicts of interest.
Filtering-Based Parameter Identification Methods for Multivariable Stochastic Systems This paper presents an adaptive filtering-based maximum likelihood multi-innovation extended stochastic gradient algorithm to identify multivariable equation-error systems with colored noises. The data filtering and model decomposition techniques are used to simplify the structure of the considered system, in which a predefined filter is utilized to filter the observed data, and the multivariable system is turned into several subsystems whose parameters appear in the vectors. By introducing the multi-innovation identification theory to the stochastic gradient method, this study produces improved performances. The simulation numerical results indicate that the proposed algorithm can generate more accurate parameter estimates than the filtering-based maximum likelihood recursive extended stochastic gradient algorithm. Introduction System identification is the theory and methods of establishing the mathematical models of dynamical systems [1][2][3][4][5] and some identification approaches have been proposed for scalar systems and multivariable systems [6][7][8][9][10][11]. Multivariable systems exist more widely in modern large-scale industrial processes, multivariable systems can more accurately describe the characteristics of dynamic processes, and have extensive application prospects to study the identification methods of multivariable systems [12][13][14]. The identification methods of multivariable systems can be regarded as an extension of those of scalar systems [15,16]. Therefore, how to identify the multivariable systems by extending the identification methods of scalar systems has attracted much attention. This paper focuses on the identification issues of multivariable systems with complex structures and many parameters. For decades, many parameter estimation methods have been developed for multivariable systems, such as the stochastic gradient methods [17], the iterative methods [18], the recursive least-squares methods [19,20] and the blind identification methods [21]. The maximum likelihood algorithm has good statistical properties and can deal with colored noises directly [22][23][24]. The present study aims to investigate a more efficient algorithm based on the maximum likelihood principle, the negative gradient search, the data filtering, and the multi-innovation identification theory. The complex structures and high dimensions in the parameter matrices of the multivariable systems lead to the increase in computational complexity [25][26][27]. Inspired by the hierarchical control based on the decomposition-coordination principle for large-scale systems, the hierarchical identification can be served as the solution to reduce the computational intensity by decomposing the identification model into several subsystems with smaller dimension and fewer variables [28]. Differing from the hierarchical identification [29], the model decomposition technique, which is based on the matrix row and column multiplication expansion, is an effective method to reduce the computational burden. Recently, the model decomposition technique are used in [30,31] to reduce the computational complexity by transforming the multivariable system into several small-scale subsystems with only the parameter vectors to be determined. 
By changing the noise model structure of the subsystem to whiten the colored noise, an adaptive filter is designed to filter the observed data, then the subsystem identification model is further simplified and the parameter estimation accuracy is improved [32][33][34]. For ARX models with unmeasurable outputs, a modified Kalman filter was designed and a new multi-step-length formulation was derived to improve the performance of the gradient iterative algorithm [35]. The advantage of the stochastic gradient methods is that they need less computational effort compared to existing identification methods [36,37]. Due to their zigzagging behavior, the stochastic gradient methods have slow convergence rates [38,39]. The focus of this paper is to investigate a new method with computational efficiency by introducing the multi-innovation identification theory into the stochastic gradient method. The innovation is the useful information that can improve the accuracy of parameter estimation or state estimation. From the viewpoint of innovation modification, the multi-innovation identification theory improves the convergence rate and parameter estimation accuracy from the following two aspects [40,41]. Firstly, the multi-innovation method uses not only the current data but also the past data in each recursive calculation step, which is the reason to improve the convergence rate. Secondly, the multi-innovation method repeatedly utilizes the available data in the neighboring two recursions, which is the reason to improve the parameter estimation accuracy. In this aspect, multi-innovation methods have been developed in [42,43]. It is well known that an increasing innovation length leads to better parameter estimation accuracy, but the price paid is a large computational effort [44,45]. The difficulty arises as to how to choose the innovation length. In summary, although a filtering and maximum likelihood-based recursive least-squares algorithm is available for multivariable systems with complex structures and colored noises [32], there remains a need for enhancing the parameter estimation accuracy with computational efficiency. Motivated by these considerations, this paper has the following contributions: • The data filtering and model decomposition techniques are used to reduce the computational complexity of the multivariable systems contaminated by uncertain disturbances. • A filtering-based multivariable maximum likelihood multi-innovation extended stochastic gradient (F-M-ML-MIESG) algorithm is proposed for improved parameter estimation accuracy while retaining desired computational performance. X := A: X is defined by A. k: The time variable. Consider the following multivariable controlled autoregressive autoregressive moving average (M-CARARMA) model: where y k ∈ R m and u k ∈ R r are the output and input vectors, respectively, v k ∈ R m denotes random white noise vector with zero mean and variance σ 2 . The polynomials A(q), Q(q), C(q), and D(q) are expressed as Assume that y(k) = 0, u(k) = 0, and v(k) = 0 for k 0, the orders n a , n b , n c , and n d are known. Differing from the work in [32], the focus of this paper is to derive a new method to identify the polynomial coefficients A l , Q l , c l , and d l . Referring to the work in [32], in order to reduce the computational complexity, Equation (1) is decomposed into several subsystems. 
Then, the ith subsystem can be represented as Define From (2), it follows that Multiplying both sides of the above equation by C(q) gives That is, Then, the subsystem identification model can be expressed as Define an intermediate variable or From (4), it follows that The flowchart of computing the estimates θ i1,k , ĉ i,k and Θ k by the F-M-ML-RESG algorithm in (9)-(30) is shown in Figure 1.

The F-M-ML-MIESG Algorithm. In order to further enhance the parameter estimation accuracy of the F-M-ML-RESG method, by introducing the multi-innovation identification theory, an F-M-ML-MIESG method is investigated. Define the information matrix Γ i1 (p, k), the filtered information matrix Γ i1f (p, k), and the stacked output vector Ŷ i1 (p, k), where p is the innovation length. Define the stacked output vector Ŵ i (p, k), the information matrix Γ ic (p, k), and the information matrix Γ id (p, k) as Γ ic (p, k) := [φ ic,k , φ ic,k−1 , · · · , φ ic,k−p+1 ] ∈ R n c ×p. Referring to the work in [18,42,43], Equation (10) becomes the following equation: Equation (22) can be reformulated into the following equation: Based on the F-M-ML-RESG method in (9)-(30), the F-M-ML-MIESG method can be obtained as follows. The F-M-ML-RESG method is a special case of the F-M-ML-MIESG method because, when p = 1, the F-M-ML-MIESG method degenerates into the F-M-ML-RESG method. The proposed approaches in the paper can combine other estimation algorithms [46][47][48][49][50] to study the parameter identification problems of linear and nonlinear systems with different disturbances [51][52][53][54][55], and can be applied to other fields [56][57][58][59][60] such as signal processing and process control systems. The F-M-ML-MIESG method consists of the following steps for computing θ i1,k , ĉ i,k and Θ k. The flowchart of computing the estimates θ i1,k , ĉ i,k and Θ k by the F-M-ML-MIESG algorithm in (31)-(58) is shown in Figure 2.

The model decomposition technique is applied to solve the coupling relationship between the input and output variables of the multivariable system. Thus, the complexity of system identification algorithms is reduced. The data filtering technique is used to filter the observed data. Hence, the subsystem identification model is simplified. The proposed method is based on the data filtering technique, the coupling identification concept, the multi-innovation identification theory, and the negative gradient search for improved parameter estimation and computational performance. The maximum likelihood principle is utilized to estimate the parameters of the noise model directly.
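To make the multi-innovation idea concrete, the sketch below applies a multi-innovation stochastic gradient update to a deliberately simple single-output ARX model; it stacks the last p innovations in each recursion, as described above, but it is only a schematic illustration and not the full F-M-ML-MIESG algorithm with data filtering and maximum likelihood noise estimation.

```python
# A schematic sketch of the multi-innovation stochastic gradient idea for a simple
# single-output ARX model y(k) = a*y(k-1) + b*u(k) + v(k). It stacks the last p
# innovations in each update; it is not the full F-M-ML-MIESG algorithm.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, N, p = 0.6, 1.5, 2000, 5

u = rng.normal(size=N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k] + 0.1 * rng.normal()

theta = np.zeros(2)            # estimates of [a, b]
r = 1.0                        # step-size normalization
phi_hist, y_hist = [], []
for k in range(1, N):
    phi = np.array([y[k - 1], u[k]])
    phi_hist.append(phi)
    y_hist.append(y[k])
    Phi = np.array(phi_hist[-p:]).T      # 2 x p information matrix (last p regressors)
    Y = np.array(y_hist[-p:])            # stacked outputs
    E = Y - Phi.T @ theta                # multi-innovation vector
    r += phi @ phi
    theta = theta + (Phi @ E) / r        # gradient update using the last p innovations

print("estimated [a, b] =", theta)       # should approach [0.6, 1.5]
```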
Examples. Example 1. Consider the following M-CARARMA model. The parameter estimation results are given in Table 1, and the parameter estimates θ 1,k and θ 2,k versus k are shown in Figure 3. When σ₁² = σ₂² = 0.10², the F-M-ML-MIESG parameter estimation errors versus k with different innovation lengths p are shown in Figure 4. When p = 9, the F-M-ML-MIESG parameter estimation errors versus k with different noise variances are shown in Figure 5.

Example 2. Consider another 3-input and 3-output system. The simulation conditions of this example are similar to those in Example 1. Applying the F-M-ML-MIESG algorithm to estimate the parameters of this example system, the simulation results are shown in Table 2 and Figures 6 and 7.

Conclusions. This paper considers the parameter estimation of the linear M-CARARMA system with an ARMA noise. By means of an adaptive linear filter, the subsystem identification model is simplified, and an F-M-ML-MIESG method is derived by introducing the multi-innovation identification theory into the stochastic gradient method. The purpose of the adaptive filter is to improve the parameter estimation accuracy by filtering the observed data without changing the relationship between the input and output data. Both the model decomposition technique and the data filtering technique are used to reduce the system complexity, and the identification model is simplified. The simulation validation demonstrates that the F-M-ML-MIESG method provides a higher parameter estimation accuracy than the F-M-ML-RESG method when p ≥ 2. The proposed filtering-based parameter identification methods for multivariable stochastic systems in this paper can be extended to study the identification issues of other scalar and multivariable stochastic systems with colored noises [61][62][63][64][65][66] and can be applied to some engineering application systems [67][68][69][70][71][72][73] such as filtering, estimation, and prediction [74][75][76][77][78][79][80][81], and so on.
New Active Control Method Based on Using Multiactuators and Sensors Considering Uncertainty of Parameters New approach is presented for controlling the structural vibrations. The proposed active control method is based on structural dynamics theories in whichmultiactuators and sensors are utilized. Each actuator force is modeled as an equivalent viscous damper so that several lower vibration modes are damped critically. This subject is achieved by simple mathematical formulation. The proposed method does not depend on the type of dynamic load and it could be applied to control structures with multidegrees of freedom. For numerical verification of proposed method, several criterions such as maximum displacement, maximum kinetic energy, maximum drift, and time history of controlled force and displacement are evaluated in two, five, and seven-story shear buildings, subjected to the harmonic load, impact force, and the Elcentro base excitation. This study shows that the proposed method has suitable efficiency for reducing structural vibrations. Moreover, the uncertainty effect of different parameters is investigated here. Introduction Smart structures are systems that can teach and protect themselves against the external excitation such as wind and earthquake.Analyzing and designing of smart structures is based on set of sciences including materials science, applied mechanics, electronics, biomechanics, and structural dynamics.In this procedure, maintaining the structural performance against the external hazards is a very important issue called control system.Many studies have been performed in the field of structural control.These methods can be categorized into three groups, that is, passive, semiactive, and active procedures [1].Due to the simplicity, low cost of assembly and no need to the external power, the passive control systems are numerous.However, the constant control feature makes these systems fail during the earthquakes.In other words, these systems are designed to work only for a certain excitation and limited frequency bound. The passive control system tries to remove the kinetic energy from the structure.Because of the mentioned constraints in passive algorithms, active control is highly regarded systems to cope with the earthquake.These techniques have suitable efficiency in different excitation, so that they could exactly sense and adopt the structural vibrations.To achieve this goal, each active control method is constructed based on algorithm, which verifies its efficiency and accuracy.The application of such systems began in 1989.In these systems an external power source is required so that this applied force affects the structural equilibrium equation.This applied force may lead to instable vibrations if the active control algorithm is not suitable. Hence, the complexity, the calculations volume, the instability risk, and the uncertainty factor are some difficulties that arise from active control systems.It should be noted that good performance of active methods depends on some parameters such as the reliable algorithm and the suitable positions for both sensors and actuators [2][3][4][5]. 
There are several active control mechanisms proposed by different researchers that deal with such subjects. In this way, Bayard and his coworkers present the D-optimal design principle, in which the maximum determinant of the Fisher information matrix is chosen as the criterion function [6]. This paper simplifies the selected modes into a unitary form by a simple method, so that the suitable position of the piezoelectric elements is achieved. Moreover, Kamada and his coworkers modeled a four-story building, which was controlled by piezoelectric actuators, utilizing different algorithms [7]. This study shows that floor accelerations could be reduced by up to seventy percent. The subject of determining the suitable position of the piezoelectric actuators was investigated by some other researchers. For example, Han and Lee present a controllable Grammian matrix for smart composite plates, in which the maximum eigenvalue is used as the performance function [8]. In this study, the genetic algorithm was utilized to find the effective locations of piezoelectric sensors and actuators. Moreover, Sadri and his coworkers presented some criteria for determining the optimal position of piezoelectric actuators using the controllability of modes [9]. In another study, Gao and his coworkers considered a vibration suppression problem in which the total radiated acoustic power or acoustic potential energy is minimized [10]. They used a genetic algorithm with immune diversity to evaluate the suitable positions of actuators. Simultaneously, Zhang and his coworkers investigated a performance function based on maximizing the dissipated energy that arises from the control action [11]. According to this study, a float-encoded genetic algorithm was presented which is capable of solving this optimization problem. Cao and his coworkers used the element sensitivities of singular values to identify the suitable positions of actuators, based on running an optimization process [12]. By using topology optimization, Kögl and Silva presented an approach to design piezoelectric plate and shell actuators [13]. In this method, the optimization problem consists of distributing the piezoelectric actuators in such a way as to achieve a maximum output displacement in a given direction at a given point of the structure. Moreover, an experimental study on piezoelectric actuators was performed by Sethi and Song [14]. They controlled the vibrations of a three-story frame, using a piezoelectric patch sensor and actuator. Also, they implemented a pole placement modal mechanism to control all vibration modes. Beyond piezoelectric devices, there are other kinds of active control algorithms. For example, Song and his coworkers presented an active control mechanism for a space truss, using a lead zirconate titanate stack actuator [15]. The common active control algorithms are listed in Table 1, together with the main idea used in each method [16]. Finally, the semiactive procedures are achieved based on modifying the passive control systems in combination with active mechanisms [17].
It should be noted that the common active control algorithms use the mathematical concepts such as the genetic algorithms, the Fuzzy logical approaches, the optimization techniques, and other mathematical theories.In these methods, the fundamental principles of structural dynamic which introduce the dynamic behavior of structures were ignored.For this reason, common active control schemes are consistent with structural behavior.The proposed method tries to solve this defect so that the new algorithm is achieved based on a well-known structural dynamics theory, that is, critical damping concept.Based on this theory and also using multiple actuators, a new method is presented here for active control of structures.For this purpose, some fundamental theories of structural dynamics are utilized so that the actuators are modeled as additional viscous dampers in dynamic equilibrium equation.This procedure leads to the actuators forces.Moreover proper positions of both actuators and sensors are determined by an innovate technique.Efficiency of the proposed control method is also evaluated by solving some numerical examples. The Proposed Active Control Concept Based on Multiactuators and Sensors Dynamic equilibrium equation of structure can be implemented with various methods such as the Hamilton principle [28]: where M, C, and S are mass, damping, and stiffness matrices of structure, respectively.Furthermore P and D are external force and the nodal displacement vectors, respectively.Also, super dots (⋅) denote differential with respect to time.In active control case, the dynamic equilibrium equation is incorporated into the following relationship: where F a is the equivalent actuator force vector which is generated from the active control mechanism.From the structural dynamics point of view, the structural vibrations damp in the lowest possible time if the structure behaves in the critical damping condition.This is the main concept used here for designing new active control procedure.To explain this idea, consider a multidegree of freedom structure i.e.q. In modal dynamic analysis, this structure has vibrations mode.For exposing the structure to behave in fully critical damping conditions, all vibration modes should be in critical damping conditions.From this point of view, actuators should be attached to structure which causes all elements of the actuator force vector, F a , to be nonzero.It is clear that high number of actuators increases the cost of control process which is not suitable.To prevent this difficulty, few numbers of actuators, that is, ( less than ) is used.Also, the effect of lower dynamic modes is more than higher modes.Therefore actuators forces are calculated so that lower modes oscillate in critical damping conditions.Based on the above discussion, the critical damping theory is utilized to determine the equivalent actuators forces.For this purpose, (2) is transformed to the modal space as follows: where , , and are mass, damping, and stiffness of ith modal coordinates, that is, , respectively.Also, is the number of degrees of freedom and is the ith mode shape vector of free vibration of the structure.If the mode's rank in (3) increases, its effect on dynamic response decreases.Therefore, the primary mode has the highest effect on the dynamic response compared with other vibrations modes. 
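Because the printed equations lost their symbols, the sketch below restates the modal quantities involved in numerical form: mode shapes from the generalized eigenproblem, the modal mass, damping, and stiffness obtained by projecting the structural matrices, and the critical damping 2·M_i·ω_i of each mode. The two-DOF matrices and the Rayleigh coefficients are illustrative assumptions, not values from the paper.

```python
# A sketch of the modal quantities behind the formulation above: mode shapes from
# K*phi = omega^2 * M * phi, modal mass/damping/stiffness, and the critical modal
# damping 2*M_i*omega_i. The 2-DOF matrices and Rayleigh factors are illustrative.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])                      # lumped masses
K = np.array([[600.0, -200.0],
              [-200.0, 200.0]])              # lateral stiffness
C = 0.02 * M + 0.002 * K                     # an assumed Rayleigh damping matrix

eigvals, Phi = eigh(K, M)                    # generalized eigenproblem
omega = np.sqrt(eigvals)                     # natural frequencies (rad/s)

M_modal = Phi.T @ M @ Phi                    # (near-)diagonal modal mass matrix
C_modal = Phi.T @ C @ Phi
S_modal = Phi.T @ K @ Phi

for i in range(len(omega)):
    c_cr = 2.0 * M_modal[i, i] * omega[i]    # critical damping of mode i
    print(f"mode {i+1}: omega = {omega[i]:.2f} rad/s, "
          f"C_i = {C_modal[i, i]:.3f}, critical = {c_cr:.3f}")
```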
Table 1: Common active control algorithms and their main concepts.
- Linear optimal control: minimize a performance index (Yang 1975 [18]).
- Pole assignment technique: minimize a performance index (Abdel-Rohman and Leipholz 1978 [19]).
- Independent modal space control: minimize the modal control force (Meirovitch and Oz 1980 [20]).
- Instantaneous optimal control: control by minimizing the energy function at each instant of time (Yang et al. 1987 [21]).
- Bounded state control: keep the responses within the allowed range (Reinhorn et al. 1987 [22]).
- Nonlinear control: minimize a higher-order function (Wu et al. 1995 [23]).
- Generalized feedback: the control force is a function of displacement, velocity, and acceleration (Yang et al. 1991 [24]).
- Sliding mode control (SMC): create the sliding surface (Yang et al. 1994 [25]).
- Time delay compensation: compensate for the time delay between the response and the control action (Abdel-Rohman 1985 [26]).
- Database and rule base: active control using neural networks and fuzzy logic (Tani et al. 1998 [27]).
- Genetic algorithm: active control using a genetic algorithm (Akutagawa et al. 2004 [1]).

This principle is utilized to obtain the equivalent actuators forces. Since there are actuators attached to the structure, the modal equations could be written as follows: where is the th element of the th modal shape vector and is the equivalent actuator force attached to the th degree of freedom. The actuators act as additional viscous dampers. This is a model used in the mathematical formulation of the proposed active control method. This model leads to an effective actuator force which has suitable compatibility with the structural behavior. It should be emphasized that modeling the actuator as an equivalent viscous damper does not mean that the actuator's force should be applied to the structure by viscous dampers. In other words, any device which produces such forces is suitable for use in practical cases. Therefore, the proposed algorithm only presents/calculates the suitable value of the actuator's force at each time of the dynamic analysis. Then, this force could be generated by any power source device such as a piezoelectric actuator. In the following, (4) is transformed to the following relationship: where * is the th equivalent coordinate damping, which is formulated based on both the natural structural damping and the effect of the actuator forces: If the equivalent damping coordinates are equal to the critical damping, the structural oscillations damp in the lowest time: where is the th natural frequency of the structure. It is clear that using actuators leads to unknown actuator forces which should be determined at each time instant. For this purpose, (7) presents a system of equations which could be solved at each analysis time. In the case of one actuator, substituting (6) into (7) leads to the following result: Equation (8) is completely consistent with the results presented by Alamatian and Rezaeepazhand [29], in which the active control process is formulated based on using only one actuator. Therefore, the proposed method is much more general than existing methods, so that the structural vibrations could be controlled by several actuators. In the case of structural control with two actuators, (7) leads to the below set of equations: where and are the two degrees of freedom to which the actuators are attached. By solving system (9), the two actuator forces are obtained: A similar approach could be utilized to formulate the proposed active control method with three actuators. In this case, the first three
damping coordinates are equal to their corresponding critical values, which leads to the following system of equations: where , , and are the three degrees of freedom; actuators are attached to them.Based on proposed method the actuators forces can be updated at any second of analysis just by solving a system of equations.There are some unknown parameters in previous systems of equations, that is, ( 9) and ( 11), such as coordinate's velocities ( Ż ).Based on the structural dynamics theory, modal coordinate velocities depend on the nodal velocities.The coefficient of proportionality is elements of inverse modal shapematrix.Consider where inv is th element of the inverse modal shape matrix.To determine the modal velocities, it is necessary to determine both number of sensors and their locations.By increasing number of sensors, the accuracy of modal velocities increases.For example, if there are sensors attached to the structure, (12) can be written as follows: Here, , , . .., are degrees of freedoms, sensors attached to them.It is clear that the proposed method presents the actuator's forces by solving a set of simultaneous equations in each time step.The dimension of this set of equation is equal to the number of actuators.Since number of actuators, attached to the structure, is finite (for practical cases), the dimension of the obtained set of simultaneous equations is quite small so that it solves in a small amount of time, compared with other calculations.Therefore, the required time, spent for calculating the actuator's forces is negligible.In other words, the proposed algorithm could run in real-time fast enough so that it can be utilized for active control of the realistic structures. The Proper Actuator and Sensor Locations In this section, the suitable degrees of freedoms actuators and sensors could be attached to them are evaluated.Here fundamental structural dynamics theories are utilized.For example the first mode usually has the highest portion in dynamic response.Therefore, degrees of freedom with high effect in the first mode are suitable for attaching actuators. In other words, the proper degrees of freedom for actuator locations are those which their corresponding values in the first mode shape are higher than others.For determining the sensors locations, ( 13) is considered.The main criterion utilized for judgment about sensors locations is that reliable values of Ż are obtained.These quantities correspond to both nodal velocities and elements of −1 .The nodal velocities are sensed from the dynamic structural response and will be unknown.The only available parameters are nodal velocity coordinates and elements of inverse of modal shape matrix. The proper sensor locations are determined based on those quantities so that sensors are attached to the degrees of freedom with higher corresponding values in the first row of the inverse matrix of mode shape.For example, in the case of two sensors, they should be installed in degrees of freedom associated with the two highest elements in the first row of the inverse matrix of mode shape.Since there are two sensors in the smart structure, the first and second modal coordinate's velocity can be calculated as follows: where 1 and 2 are th element in the first and second row of the inverse modal shape matrix.Also, Ḋ and and Ḋ are the velocity of th and th degree of freedom which the sensor are attached to them. 
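A compact sketch of the per-time-step force computation implied by the equations above is given below: the equivalent damping of each controlled mode is forced to its critical value, which yields a small linear system in the unknown actuator forces, and the modal velocities are recovered from the sensors through rows of the inverse mode-shape matrix. All names and shapes are illustrative, since the printed equations omit the original symbols.

```python
# A sketch of the per-time-step actuator force computation implied above: require the
# equivalent damping of the first m modes to be critical and solve an m x m linear
# system for the actuator forces. Shapes and symbol names are illustrative.
import numpy as np

def actuator_forces(Phi, M_modal, C_modal, omega, z_dot, actuator_dofs):
    """Phi: mode-shape matrix (n_dof x n_modes); M_modal, C_modal, omega: modal mass,
    damping, and natural frequency of the first m modes; z_dot: modal velocities of
    those modes; actuator_dofs: the m degrees of freedom carrying the actuators."""
    m = len(actuator_dofs)
    A = np.zeros((m, m))
    b = np.zeros(m)
    for i in range(m):                                    # i-th controlled mode
        for j, dof in enumerate(actuator_dofs):
            A[i, j] = Phi[dof, i]                         # participation of force j in mode i
        c_cr = 2.0 * M_modal[i] * omega[i]                # critical damping of mode i
        b[i] = (c_cr - C_modal[i]) * z_dot[i]             # extra damping force required
    return np.linalg.solve(A, b)                          # equivalent actuator forces

# Modal velocities can be estimated from sensor readings via rows of inv(Phi), e.g.
# z_dot[i] ~= sum over sensor DOFs s of inv(Phi)[i, s] * measured velocity at s.
```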
The Proposed Active Control Algorithm

To verify the proposed active control method, several numerical dynamic analyses are performed. For this purpose, the analysis time is divided into a finite number of time steps, and in each time step a numerical time integration scheme is used to obtain the structural responses. Here, the Newmark method with linear acceleration is used. The main steps of the proposed active control process are as follows.

(a) Set the time counter to zero and select the time step of the dynamic analysis.
(b) Construct the stiffness, mass, and damping matrices.
(c) Determine the actuator and sensor locations using the modal shape matrix and its inverse.
(d) Calculate the modal velocities.
(e) Solve the set of equations to determine the actuator forces.
(f) Compute the displacement vector of the current time step using the Newmark method with linear acceleration [30].
(g) Calculate the structural velocity vector based on the Newmark method [30].
(h) Calculate the acceleration vector of the current time step by solving the corresponding linear system.
(k) Print the results and end.

Numerical Study

To verify the validity of the proposed active control method, some numerical examples are presented. For this purpose, the suggested active control process is combined with numerical dynamic analysis, that is, Newmark integration with the linear acceleration scheme.

Two Degree of Freedom System. Figure 1 shows a linear two-DOF system subjected to an impact load defined in terms of the time step Δt. The procedure that leads to the optimum sensor and actuator locations is summarized in Table 2. By applying the proposed method, the second degree of freedom is the suitable location for attaching both the actuator and the sensor, that is, case S2-A2, because it has the highest corresponding values: element (1, 2) of the inverse mode shape matrix equals 0.8167 and element (2, 1) of the mode shape matrix equals 0.8594. After determining the sensor and actuator locations, the proposed active control method is applied to this structure. Figures 2 and 3 show the displacement time responses of the first and second degrees of freedom for the different control cases, respectively. All control cases are stable, and the proposed method reduces the vibration amplitude of both degrees of freedom in a short time, so the suggested process is efficient for vibration control. If the actuator and sensor are attached to the first degree of freedom (case S1-A1), the efficiency of the control process is reduced. The control case S1-A2 has lower efficiency than the other two cases, and case S2-A1 is incapable of controlling the system. This clearly shows that the proposed algorithm for determining the actuator and sensor locations works properly.

Five Story Shear Building. Figure 4 shows a five-story shear building modeled by lumped masses and lateral stiffnesses (five horizontal degrees of freedom). This structure is analyzed in two different cases. In the first analysis, a harmonic load, 50 sin(10t), is applied to the fifth story of the building. The second analysis is performed when the structure is excited by the El Centro ground acceleration record. In both analyses, a damping ratio of 5% for the first mode is assumed for constructing the Rayleigh damping matrix with two factors [31]. To control the vibrations of the structure, the sensor and actuator locations are determined with the proposed algorithm. Table 3 presents the details that lead to the optimum locations of the sensors and actuators. Using the results of Table 3, the various control cases defined by the number of actuators and sensors are listed in Table 4.
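Referring back to the algorithm steps (a)-(k) above, the sketch below shows one analysis step in Python. The actuator forces are obtained by requiring the equivalent damping of the first modal coordinates to become critical, which is one plausible reading of the critical-damping condition; the paper's exact assembly of that system may differ, and mass-normalized mode shapes are assumed. The response is then advanced with a standard Newmark step using the linear acceleration parameters (gamma = 1/2, beta = 1/6). All variable names are hypothetical.

```python
import numpy as np

def actuator_forces(Phi, omega, zeta, qdot, act_dofs):
    """Sketch: choose forces so the first len(act_dofs) modal coordinates see
    critical equivalent damping (assumes Phi is mass-normalized)."""
    n = len(act_dofs)
    # row i: modal force produced on mode i by a unit force at each actuator DOF
    A = np.array([[Phi[d, i] for d in act_dofs] for i in range(n)])
    # required extra damping force per mode: -(c_crit - c_natural) * modal velocity
    b = np.array([-(2.0 * omega[i] * (1.0 - zeta[i])) * qdot[i] for i in range(n)])
    return np.linalg.solve(A, b)

def newmark_step(M, C, K, p_next, u, v, a, dt, gamma=0.5, beta=1.0/6.0):
    """One implicit Newmark step (linear acceleration for beta = 1/6)."""
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    rhs = (p_next
           + M @ (u / (beta * dt**2) + v / (beta * dt) + (1.0 / (2 * beta) - 1.0) * a)
           + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                  + dt * (gamma / (2 * beta) - 1.0) * a))
    u_next = np.linalg.solve(K_eff, rhs)
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1.0 / (2 * beta) - 1.0) * a
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Inside the time loop, the control forces would be added to the external load:
#   f = actuator_forces(Phi, omega, zeta, qdot, act_dofs)
#   p_total = p_external.copy(); p_total[act_dofs] += f
#   u, v, a = newmark_step(M, C, K, p_total, u, v, a, dt)
```

Because the system solved by actuator_forces has as many equations as actuators, its size stays small in practice, which is consistent with the claim that the force update is cheap enough for real-time use.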
Three factors are considered in the numerical evaluation of the proposed method: the maximum story displacements, the maximum kinetic energy of the structure, and the maximum drift. The control cases compared (Table 4) are:

- A 5 - S 4: one actuator (at degree of freedom 5) and one sensor (at degree of freedom 4)
- A 5,4 - S 4: two actuators and one sensor
- A 5,4,3 - S 4: three actuators and one sensor
- A 5,4,3 - S 4,3: three actuators and two sensors
- A 5,4,3 - S 4,3,5: three actuators and three sensors
- A 5,4,3 - S 4,3,5,2: three actuators and four sensors
- A 5,4,3 - S 4,3,5,2,1: three actuators and five sensors

Table 5 shows the maximum structural displacements for the harmonic load. According to the results of Table 5, when one actuator and one sensor are used, the displacement of the upper floors can be reduced by about 65%. If the structural vibrations are controlled with two actuators and one sensor, the displacement of the upper floors is likewise diminished by about 65%, and the reduction rate reaches 75% in the case of three actuators and two sensors. If three sensors and three actuators are used, the maximum displacements of the fourth and fifth floors are reduced by about 80 and 75 percent, respectively. Another factor used in the numerical evaluation of the proposed method is the maximum kinetic energy during the dynamic analysis. Table 6 shows the maximum kinetic energy of the five-story shear building subjected to the harmonic load. The proposed control approach performs well in reducing the kinetic energy of the system, and increasing the number of sensors and actuators reduces the kinetic energy considerably. The variation of the fifth-story drift also differs between the control cases, as reported in Table 7; the performance of the proposed method improves as more actuators are used.

To assess the proposed algorithm against broadband earthquake excitations, the structure is also analyzed for a seismic load, namely the El Centro earthquake accelerogram. The maximum displacements of the fourth and fifth floors caused by the El Centro earthquake are given in Table 8. The results of Table 8 show that if the structural vibration is controlled by one actuator and one sensor, the displacements of the upper floors are reduced by about 42%; this reduction reaches about 70% if three actuators and two sensors are used.

Figure 5 shows the time history of the fifth-story displacement for the different control processes, and the variation of the fifth-floor acceleration for both the controlled and the uncontrolled system is plotted in Figure 6. Based on Figures 5 and 6, the proposed algorithm, which determines the sensor and actuator locations and calculates the actuator force, performs well in the numerical results. Moreover, the time histories of both the earthquake load and the actuator force (for the case of one actuator and one sensor) are plotted in Figure 7. The maximum actuator force is less than fifty percent of the floor weight, and Figure 7 shows that there is a logical balance between the actuator force and the earthquake load.

Seven-Story Shear Building.
Here, a seven-story shear building is analyzed; the lumped mass and lateral stiffness of each story are 0.2591 kg and 100 N/cm, respectively. This structure is excited by the El Centro acceleration record. To control the vibration of the structure, the sensor and actuator locations are determined with the proposed algorithm; the details of this procedure are given in Table 9. Using the various control cases defined by the number of actuators and sensors, the maximum displacements of the sixth and seventh stories are reported in Table 10. The results show that the proposed control approach is suitable for reducing the maximum displacements of the system. Moreover, the time history of the structural displacement is plotted in Figure 8 for the different control cases. Figure 8 shows that the efficiency of the proposed active control scheme increases when more sensors and actuators are used.

All of the above dynamic analyses are completely stable, so no instability of the control process is observed for the proposed algorithm. It is worth noting that the proposed active control mechanism does not impose any additional condition on the stability of the analysis, because it is formulated on the basis of well-known structural dynamics theory. From this point of view, the prepared algorithm is well matched to the dynamic behavior of structures, which reduces the instability potential that exists in any active control method.

The Effect of Uncertainty in the Proposed Method

There are many variables in the analysis and design of structures, such as loads, capacity of elements, and material properties. Uncertainty in each of these variables has a significant effect on structural safety. In this section the story masses are treated as random variables. To evaluate the effect of uncertainty in the story masses on the proposed method, a random variation within the range of +15% to -15% of the base mass was considered for each story, and the corresponding sets of story masses were generated in MATLAB. Thirty cases were considered (case 1 to case 30), and in each case the mass of every story may be reduced or increased relative to its base value. Table 11 shows the results of the proposed method using one sensor and one actuator for the various states of random masses within 15 percent of the base masses. According to the results, mass changes within the range of 15% caused a displacement increase of up to 14.6% and a displacement reduction of up to 10% for the fourth floor. Under the same conditions, the displacement of the fifth floor increased by up to 18% and decreased by up to 14%.

Applicability of the Proposed Method

Regarding the applicability of the proposed method to realistic structures, the use of piezoelectric stack actuators can be advantageous for several reasons. Here, the arrangement introduced by Kamada et al. [7] is adopted: piezoelectric stack actuators are placed at the bottom of the first-story columns on both sides. If the induced voltage is applied in reverse phase to these actuators, concentrated bending moments are produced at the bottom of the columns. By applying basic structural calculations, the equivalent shear force at each story level is obtained from the column height, the force produced by the actuators per unit voltage, and the equivalent shear force produced by the actuators at the first-floor level. The remaining parameters are shown in Figure 9.
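The mass-uncertainty study described above can be reproduced in outline with a simple Monte Carlo loop: each story mass is perturbed uniformly within plus or minus 15% of its base value and the controlled analysis is rerun, recording the change in peak floor displacement. The sketch below (in Python rather than the MATLAB mentioned in the text) only illustrates the sampling and bookkeeping; run_controlled_analysis stands in for the full controlled time-history analysis and is an assumed placeholder, not provided code.

```python
import numpy as np

def mass_uncertainty_study(base_masses, run_controlled_analysis, n_cases=30, spread=0.15, seed=0):
    """Perturb each story mass uniformly within +/- spread of its base value and
    record the percentage change in peak floor displacement relative to the baseline run."""
    rng = np.random.default_rng(seed)
    base_masses = np.asarray(base_masses, dtype=float)
    baseline = run_controlled_analysis(base_masses)          # peak displacement per floor
    changes = []
    for _ in range(n_cases):
        factors = rng.uniform(1.0 - spread, 1.0 + spread, size=base_masses.size)
        peaks = run_controlled_analysis(base_masses * factors)
        changes.append((peaks - baseline) / baseline * 100.0)  # percent change per floor
    return np.array(changes)                                  # shape (n_cases, n_floors)
```

Reporting the maximum and minimum of each column of the returned array gives the kind of increase/decrease ranges quoted for the fourth and fifth floors.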
Conclusions

In this paper, a new method was developed for the active control of structures. The process is based on the theory of structural dynamics. In the proposed model, the structural vibrations are controlled by multiple actuators and sensors, and each actuator is modeled as an additional viscous damper. The actuator forces are calculated so that the equivalent damping of as many of the first (lowest) modal coordinates as possible becomes critical. To assess the proposed method, the changes in maximum displacement, maximum kinetic energy, and total maximum relative story displacement were examined for various control cases in a five-story structure. Based on the numerical results, using one actuator and one sensor reduces the displacement of the upper floors by about 65% compared with the uncontrolled case. The displacement of the upper stories is also reduced when two actuators and one sensor are used, and with three actuators and two sensors at the proper locations the maximum upper-story displacement is reduced by up to 75%. In terms of maximum kinetic energy, using one actuator and one sensor reduces this quantity by up to 90%, while using three actuators and three sensors reduces the total relative story displacement by up to 70%. In addition, the effect of uncertainty in the story masses on the floor displacements was evaluated.

Figure and table captions:
Figure 2: The displacement-time response of the first degree of freedom of the 2-DOF system.
Figure 3: The displacement-time response of the second degree of freedom of the 2-DOF system.
Figure 5: The time history displacement of the fifth story of the shear building excited by the El Centro earthquake.
Figure 6: The time history acceleration of the fifth story of the shear building excited by the El Centro earthquake.
Figure 7: Comparison between the actuator and the earthquake forces in the case of one actuator and one sensor.
Figure 9: The piezoelectric actuator's mechanism: (a) front view and (b) side view.
Table 1: Common active control algorithms.
Table 2: Suitable sensor and actuator locations.
Table 3: The actuator and sensor locations for the five-story shear building.
Table 4: Different control cases for the five-story shear building.
Table 5: The maximum structural displacements for the harmonic load.
Table 6: The maximum kinetic energy for the five-story shear building subjected to the harmonic load.
Table 7: The story drift of the fifth floor for the five-story shear building subjected to the harmonic load.
Table 8: The maximum displacement of the fourth and fifth floors in the five-story shear building excited by the El Centro earthquake.
Table 9: The actuator and sensor locations for the seven-story shear building.
Table 10: The maximum displacements of the seven-story shear building excited by the El Centro earthquake.
Table 11: The effect of uncertainty on the fourth and fifth floor displacements.
6,409.2
2014-03-31T00:00:00.000
[ "Engineering", "Physics" ]
Predictions of CD4 lymphocytes’ count in HIV patients from complete blood count Background HIV diagnosis, prognostic and treatment requires T CD4 lymphocytes’ number from flow cytometry, an expensive technique often not available to people in developing countries. The aim of this work is to apply a previous developed methodology that predicts T CD4 lymphocytes’ value based on total white blood cell (WBC) count and lymphocytes count applying sets theory, from information taken from the Complete Blood Count (CBC). Methods Sets theory was used to classify into groups named A, B, C and D the number of leucocytes/mm3, lymphocytes/mm3, and CD4/μL3 subpopulation per flow cytometry of 800 HIV diagnosed patients. Union between sets A and C, and B and D were assessed, and intersection between both unions was described in order to establish the belonging percentage to these sets. Results were classified into eight ranges taken by 1000 leucocytes/mm3, calculating the belonging percentage of each range with respect to the whole sample. Results Intersection (A ∪ C) ∩ (B ∪ D) showed an effectiveness in the prediction of 81.44% for the range between 4000 and 4999 leukocytes, 91.89% for the range between 3000 and 3999, and 100% for the range below 3000. Conclusions Usefulness and clinical applicability of a methodology based on sets theory were confirmed to predict the T CD4 lymphocytes’ value, beginning with WBC and lymphocytes’ count from CBC. This methodology is new, objective, and has lower costs than the flow cytometry which is currently considered as Gold Standard. Background HIV infection has affected around 60 million people to date [1]. In 2009, there were 33.3 million people living with HIV worldwide; 2.6 million new cases were presented and 1.8 million deaths were secondary to AIDS in the same year [2]. By 2009, Sub-Saharan Africa was the leading region in the world for deaths caused by AIDS, recording 1.3 million cases [2]. Even though AIDS is a global problem, countries with fewer resources are mostly affected [2,3]. HIV is a retrovirus that mainly affects T cells and those cells that express CD4, such as macrophages, follicular dendritic cells and lymph nodes [4]. In the natural history of HIV infection, there is an initial decrease in the number of TCD4 lymphocytes that relates to the clinical primary infection (2 weeks after infection); then a partial recovery occurs, due to atypical lymphocytes and to an increase in T CD8 lymphocytes (3-4 weeks after exposure). Finally the number of lymphocytes decreases again; slowly during the latent period and faster during the final stage which is characterized by a notorious immunodeficiency with CD4 counts below 500 CD4/μl 3 [4]. For this reason, both the percentage of T CD4 lymphocytes and the occurrence of opportunistic infections define the stages of HIV infection and provide treatment guidelines. Currently this percentage is one of the referenced biological and immunological markers for HIV infection and AIDS control and it is also a predictor of mortality [5]. Its determination is the result of three laboratory steps: count of WBC, percentage of WBC that are lymphocytes or differential count, and percentage of CD4 lymphocytes. This last stage is performed by a technique known as "immunophenotyping by flow cytometry", which consists in the detection of CD4 antigenic determinants on the surface of WBC using monoclonal antibodies labeled with fluorescein [6,7]. 
However, this procedure has several limitations, such as a delay of more than 24 hours between blood collection and its processing, and the costs of equipment and reagents for flow cytometry, which make it inaccessible to some developing countries, especially Africa [5][6][7][8][9]. Given the large impact that HIV/AIDS represent for global public health, it has been sought to make flow cytometry more accessible by implementing simplified flow cytometers that are chargeable by battery or solar panels [8]. On the other hand it has been sought to replace it by methods of CD4 lymphocytes count prediction from CBC parameters [10,11], epidemiological variables [12,13] or machine learning [14]. There is a cross-sectional study of CD4 prediction from CBC parameters, which used the combined values of total T lymphocytes and hemoglobin to deduce CD4 counts <200 cells/μL 3 ; however, when this prediction was compared to the deduction based on total lymphocytes, it was found that in male patients sensitivity increased with no changes in specificity, and in female patients sensitivity did not change and specificity decreased [10]. A cross-sectional study that assessed the usefulness of total lymphocyte count as surrogate marker of T CD4 lymphocyte's count in HIV-positive patients found that there is a high correlation [11]. However, low sensitivity of total lymphocyte count was found in the classification of patients with CD4 counts <200 cells/μL [11]. Another epidemiological study, sought to predict the variability of T CD4 lymphocytes' decrease in seropositive patients by determining the distribution of CD4 counts in seronegative patients and survival rates after acquiring HIV infection [12]. This model was applied to different populations and individuals showing accuracy predictions over 75% with respect to the real value of T CD4 lymphocytes variability [13]. In the model proposed by Singh and Mars, based on machine-learning, the CD4 final count is obtained from the viral load values and the number of weeks after the first T CD4 lymphocytes' count, with an accuracy of 83% with respect to the real value [14]. In a previous study, Rodríguez et al. [15] developed a new methodology applying sets theory to predict T CD4 lymphocytes' count based on individual values of total WBC and lymphocytes obtained from CBCs. In that work 110 CBCs were analyzed and then classified into four sets named A, B, C and D, where union between sets A and C and union between sets B and D were evaluated, as well as the intersection of both unions. These results were classified into eight ranges of 1,000 leukocytes/mm 3 each, for its evaluation. The conclusion was that ranges below 5000-4000 leukocytes/ml 3 predict CD4 counts lower than 570 CD4/μL 3 with effectiveness percentages between 90-100% [15]. This showed that the study of the variation process of T CD4 lymphocytes' count reveals an underlying mathematical order when observed through theoretical abstractions; this order allows making simple predictions that are independent of virus characteristics or patient variables. The aim of this work is to validate the clinical application of the methodology developed based on sets theory, applying it to a larger sample of HIV-positive cases. Definitions Determined Sets for the study of leukocytes/mm 3 , lymphocytes/mm 3 and CD4/μL 3 populations [15]: Where (x, y, z) is a triplet of values, being "x" the number of WBC, "y" the number of lymphocytes and "z" the T CD4 lymphocytes' count. 
It is a study in which a physical-mathematical previously developed methodology based on sets theory is applied in order to predict T-CD4 lymphocytes' count. It is based on the mathematical analysis of the total WBC and lymphocytes' count in HIV-positive patients. Sample Printed CBCs of 800 HIV diagnosed patients were used, without distinction of gender, age, population kind, or clinical variables such as infection stage, hemoglobin value or medications used. The CBCs were taken from storage tests in a physical data-base of the infectologist who participated in the study. Procedure First, records of leukocytes/mm 3 , lymphocytes/mm 3 and CD4/μL 3 subpopulation counts measured by flow cytometry were taken. Then, they were organized in descending order according to the WBC number, establishing ranges of 1000 leukocytes/mm 3 . Values higher than 10.000/mm 3 were assigned to a single range as well as values lower than 3.000/mm 3 , so a total of 9 ranges were established in order to observe mathematical relationships between populations, independent of time or patient's evolution. According to the previously developed methodology, records were evaluated by establishing if they belonged or not to sets A∪C and B∪D, as well as to set (A∪C) ∩ (B∪D), using a software that was previously developed based on sets algebra [15]. This software calculates the range of values in which the T CD4 lymphocytes' count is, beginning with WBC and lymphocytes number from CBC and applying the evaluated predictive methodology. Results for the 9 leukocytes ranges were assessed, determining the elements number that belong to each set in each range and the percentage of success to which it corresponds to, according to the total number found for each range. In addition, the same values were established for the whole sample. In this work, the belonging percentage of each range to each one of sets is equivalent to the effectiveness percentage of prediction for such range. When a triplet of values belongs to all sets, this fact means that this triplet met with the condition of have a leukocyte value equal to or higher than 6800/mm 3 , with a lymphocyte value equal to or higher than 1800/mm 3 and with a CD4 cell value equal to or higher than 300/mm 3 or it may have a leukocyte value lesser than 6800/mm 3 , with a lymphocyte value equal to or lesser than 2600/mm 3 and with a CD4 cell value equal to or lesser than 570/mm 3 . Statistical analysis Some performance measures were established for each range through a binary classification performance measurement, where True positive (TP) is the number of cases with a correct prediction in the range with respect to real values, False negative (FN) is the number of wrong predictions in the range with respect to real values, and finally True negative (TN) is the total number of correct predictions in the other ranges. 
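As an aside, the belonging condition quoted earlier in this section (leukocytes >= 6800/mm3 with lymphocytes >= 1800/mm3 and CD4 >= 300, or leukocytes < 6800/mm3 with lymphocytes <= 2600/mm3 and CD4 <= 570) can be written as a small predicate. The sketch below is an illustration of that check only; it is not the authors' software, and the prediction wrapper for a new patient (where CD4 is unknown) is a hypothetical reading of the rule, using only the leukocyte and lymphocyte parts of the condition.

```python
def belongs_to_all_sets(leukocytes, lymphocytes, cd4):
    """Check the combined belonging condition for a (leukocytes/mm3, lymphocytes/mm3, CD4) triplet."""
    high = leukocytes >= 6800 and lymphocytes >= 1800 and cd4 >= 300
    low = leukocytes < 6800 and lymphocytes <= 2600 and cd4 <= 570
    return high or low

def predicted_cd4_range(leukocytes, lymphocytes):
    """One plausible reading of the prediction rule for a new patient (hypothetical wrapper)."""
    if leukocytes < 6800 and lymphocytes <= 2600:
        return "CD4 <= 570"
    if leukocytes >= 6800 and lymphocytes >= 1800:
        return "CD4 >= 300"
    return "no prediction from these two sets"

print(belongs_to_all_sets(2800, 1500, 400))   # True (low branch)
print(predicted_cd4_range(2800, 1500))        # "CD4 <= 570"
```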
The performance measures calculated for each range were Sensitivity (SENS), and Negative Predictive Value (NPV); the first one which was calculated with the next equation: Otherwise, Negative Predictive Value (NPV) and was calculated by means of the next equation: Ethic aspects This study follows the laws established on articles 11 and 13 of the 008430 Colombia's Health Ministry resolution of 1993 given that physical calculations were made based on results of medically prescribed tests of the clinical practice, from an anonymous database retrospectively evaluated, with no risks to patients, protecting the integrity and anonymity of participants and with no need of informed consents. The approval of an ethics committee of a specific institution is not needed because it was accessed only numerical values of the database (without access to the names, data source or clinic history of patients), collected specifically for research purposes by one of the authors. Results Belonging of leukocytes, lymphocytes and CD4 cells values to each set in 27 specific samples is shown at the Table 1). Table 2 shows that effectiveness percentage of the prediction for set A ∪ C according to each range, was between 68.42% and 100%, for set B ∪ D was between 65.66% and 100%, and for intersection set (A ∪ C) ∩ (B ∪ D) was between 55.64% and 100%. Effectiveness percentage of the prediction for the total number of cases to set A ∪ C was 81%, and to set B∪D was 80%, whereas for total number of cases to intersection set (A ∪ C) ∩ (B ∪ D) was 73.25% (See Table 2), being equal or above 73.91% in 6 out of the 9 established ranges, and over 81.44% in 5 ranges. This effectiveness percentage to the intersection (A ∪ C) ∩ (B ∪ D) was higher for the upper and lower ranges; which was between 83.05% and 83.33% for the ranges of 8000-8999 and 9000-9999, respectively; and was between 81.44% and 91.89% for the ranges of 4000-4999 and 3000-3999, respectively. For the range of leukocytes below 3000, that has more utility in clinical setting, the effectiveness percentage was of 100% (See Table 2). Statistical analysis results TP values ranged between 17 and 136, TN values were between 450 and 569, and FN between 0 and 59. Values for SENS ranged between 0.56 and 1, and values for NPV were between 0.89 and 1. The highest SENS values were for the ranges of 10000 leukocytes or more, between 9999-9000, 3999-3000, and for the range of 2999 leukocytes or less; the first three had values of 0.99 and the last one of 1.The NPVs showed values equal to or greater than 0.98 in 5 out of the 9 assessed ranges (See Table 3). Discussion This is the first work in which a new predictive methodology of T CD4 lymphocytes' count is applied to a sample of 800 HIV-positive patients. This methodology was developed beginning with the analysis of WBC and lymphocytes' count from CBC, and it is based on sets theory. Its predictive percentages are equal or above 73.91% for 6 out of 9 measured ranges, confirming its predictive capacity and clinical applicability independently of epidemiological and clinical variables. Sensitivity values over 0.80 were founding 5 of the 9 measured ranges; specificity was not calculated, given that there are no False Positives. 
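The two performance measures named above follow the standard definitions, sensitivity = TP / (TP + FN) and negative predictive value = TN / (TN + FN), which is most likely what the equations missing from the extracted text expressed. A minimal sketch of that bookkeeping, with hypothetical counts rather than values from the paper's tables:

```python
def sensitivity(tp, fn):
    # Sensitivity (recall): fraction of truly positive cases that are predicted positive.
    return tp / (tp + fn)

def negative_predictive_value(tn, fn):
    # NPV: fraction of negative predictions that are truly negative.
    return tn / (tn + fn)

# Hypothetical counts for one leukocyte range (illustrative only)
tp, fn, tn = 120, 5, 560
print(round(sensitivity(tp, fn), 2), round(negative_predictive_value(tn, fn), 2))
```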
Taking into account that the starting of anti-retroviral treatment is suggested at 300 CD4 cells/cm 3 , this predictive methodology showed an effectiveness percentage of prediction of 100% when leukocyte values were less than 3000.This means that a value of CD4 less than 570/ mm 3 is predicted for all these cases. The belonging percentage to set A ∪ C is greater than the percentage to set B ∪ D, showing the specificity of T CD4 lymphocytes' values and evidencing the difficulty to find results that allow their prediction. In contradistinction to the previous work [15], one more range of leukocytes, from 3999 to 3000, was quantified in this work in order to study more specifically the ranges of values that have greater clinical importance. High values in the predictions were found, with percentages over 73% and even of 100% for high and low ranges, which are clinically the most important. The mathematical theory through which predictions are obtained does not allow the establishment of False Positives in the statistical analysis, given that each set, as well as the intersections that constitute the prediction, exclude the possibility of finding triplets that allow obtaining a False Positive prediction. This is the reason why it is not possible to establish a positive predictive value, showing that this mathematical inductive way of thinking can't be taken directly from traditional statistical parameters; instead of that, the sets algebra way of thinking achieves deductive predictions of clinical importance. Works performed with the aim to simplify and reduce costs of HIV patients follow-up are mostly epidemiological, with limitations in the prediction of immunological biomarkers [10][11][12][13], given that its study from epidemiological variables or virus characteristics does not allow a complete, rigorous, objective, and also simple and reproducible analysis of the immunological response of such patients. Such is the case of cross-sectional descriptive studies that try to deduce T CD4 lymphocytes from CBC parameters such as total lymphocytes' count or hemoglobin. However, although a correlation between these parameters has been found, the sensitivity of deductions varies according to gender [10] or to CD4 count itself [11]. Also, studies have sought to predict the variability of T CD4 lymphocytes' [12,13]. Other studies based on machine learning, as the one proposed by Singh and Mars [14] to obtain the latest CD4 count have an accuracy not greater than 90% and require a previous count of T CD4 lymphocytes and values of viral load, which represent a moderate additional cost. Based on neural networks and machine learning, some methodologies propose viral load measurement as marker of treatment response in HIV-infected patients. The limitation of these experimental methodologies is that they don't take into account the immune response of the patient, but genotypic virus characteristics [16][17][18][19], which result in expensive flow cytometry tests [8,9]. In contrast to these studies, the present study applies a methodology that uses more accessible data such as the CBC and analyses it objectively from a sets theory approach in order to deduce the value of T CD4 lymphocytes with a high success percentage. This study provides useful scientific contributions for the development of control measures and management of HIV/AIDS pandemic; contributions of clinical applicability that may optimize care and follow-up of patients that suffer from this disease. 
In the context of dynamic systems, a descriptive but not predictive model of the immune response to HIV dynamics was developed by plotting how T CD4, CD8, B lymphocytes and antibodies act, and how the viral load progresses [20]. The present paper makes an analysis of the variation process of WBC and lymphocytes populations in HIV patients, but also allows the prediction of CD4 subtype. Furthermore, since it is based on a mathematical approach, it does not require statistical analysis, as it is not required in the study of physical phenomena such as predicting the trajectory of planets or an eclipse. Conclusions This study confirms the predictive capacity of the developed methodology based on sets theory to determine the number of T CD4 cells based on WBC and lymphocytes' count, achieving a 91.89% effectiveness for the range between 3000 and 3999 leukocytes, and 100% for the range below 3000 leukocytes. This methodology can be useful to determine the number of CD4 in places where there is no easy access to flow cytometry, reducing costs in determining the state of patients with HIV/AIDS.
4,002.6
2013-09-14T00:00:00.000
[ "Medicine", "Biology" ]
Effect of weave type on abrasive strength of cotton fiber

Abrasion has a strong relationship with the overall performance of fabrics. A number of factors influence the abrasive strength of textile materials, including the type of fiber, the nature of the yarn, the yarn number, the weave structure and type, and the number of interlacings in warp and weft. Abraded fabrics look poor in appearance and can no longer provide the service for which they are intended. This study aimed at determining the effect of various weave types on the abrasive strength of cotton fabric. The fabric was manufactured from ring-spun yarn with three weave structures: plain, twill and satin. The manufactured fabrics were evaluated for their abrasive strength following the ASTM 4966 test standard, at various laundering intervals (0, 5, 10 and 15 washes), and the mass loss in percentage was then calculated. It was concluded that the plain weave structure was the strongest in terms of abrasive strength, followed by twill and satin. Small floats help the fabric hold the component yarns together and provide cohesion to the finished product. It is important for textile manufacturers and fashion designers to consider various weave structures and their impact on products in order to make creative variations that attract their customers.

Introduction

The performance behavior of fabrics largely depends on many technical aspects that must be planned and designed during the manufacturing phase [1]. The weave structure of a finished fabric is one of the main determinants of its performance. In addition, other factors have an indirect effect on the serviceability of the end product, such as the weaving environment, temperature, humidity, yarn density, yarn number, interlacing pattern and yarn tension [2]. Woven fabrics are made by interlacing two sets of yarns, one running in the lengthwise direction, known as warp or ends, and another running crosswise, known as weft or picks [3]. Fiber content, yarn structure and weave type are some of the key factors that dictate the performance of an end product; they affect many physical and chemical properties of fabrics [4]. Woven fabrics are expected to provide strength to the material and are considered longer lasting than fabrics made by other manufacturing techniques. Their performance in terms of strength, flexibility, extensibility and durability depends on the way the yarns interlace in the warp and weft directions, the thread count per inch, the type of twist and the yarn count [5]. Woven materials are versatile due to their construction: their appearance can be varied by changing the number of upward and downward harnesses. There are many weave structures, and they influence how the end product responds to external forces, changing its appearance and shape. The weave structure also affects the mechanical behavior of products, such as the load response, the stress-strain curve, the percentage of extension and the applied force [6]. Fabrics undergo many wear and tear conditions with the passage of time. Resistance against abrasion is a very complex phenomenon; it has a strong relationship with the durability and strength of fabrics and is of great importance in apparel, upholstery and technical textiles [7]. Abrasion is the rupture of component fibers, yarns and fabrics due to their rubbing over some other surface.
This deteriorates the overall performance of fabrics. It also differs from individual to individual, depending on how the garments are used and cared for [8]. It may be caused by simple rubbing, by wear and tear, and by laundering. Pilling caused by rubbing is a persistent problem for woven and knitted fabrics: it displaces the fibers or yarns from their original place and creates small bundles that anchor firmly to the fabric surface. Abrasion can completely damage the fabric by tearing a yarn or thread, decreasing its thickness, changing its color and even making a hole in the fabric. This problem is aggravated in filament yarns, especially those made from synthetic fibers [11, 12]. The severity of abrasion usually depends on particle size, particle shape, the applied force and the frequency of rubbing against the abrader [13]. One state of abrasion occurs where solid particles move from one area to another and damage the surface, whereas another state is caused where particles roll away from the surface of the substrate [12].

Materials and methods

Ring-spun yarns were produced to make woven fabrics from cotton fibers. Three types of weave, namely plain, twill and satin, were selected to manufacture these fabrics. The manufacturing was carried out at Nishat Mills Private Limited, and the construction parameters of the manufactured fabrics are given in Table 1. These fabrics were then tested for their abrasive strength at 0, 5, 10 and 15 washing intervals.

Changes in luster and brightness of the woven fabric samples were observed before and after the abrasion cycles. After abrasion, samples became flat and smooth and their brightness changed due to color loss from the surface. Yarns were compressed under pressure, altering the light reflection and absorption at specific points, so the resulting fabric showed a combination of light and dark tones along its vertical length [20]. Table 3 (tests of within-subjects contrasts at various rubbing cycles) shows that there is a significant difference among samples made with twill and satin weaves, with p-values of 0.01 and 0.00 respectively at 1000, 2000 and 3000 rubbing cycles after 0, 5, 10 and 15 washes, whereas the difference among samples manufactured with the plain interlacing pattern is insignificant, as the p-value is greater than 0.05. It can be said that short floats and frequent interlacing helped to provide strength to the woven fabric. The plain interlacing pattern is better able to make the fabric strong in terms of tensile and abrasive strength because its yarns are closely packed; they form a dense cover and protect the substrate (Table 3). On the other hand, long floats are more prone to abrasion, as in the case of the satin weave, and there is a greater chance of yarn slippage with long floats [21]. Uneven yarns have low abrasive strength due to their thick and thin regions; these irregularities make the fabric weak and unable to bear the pressure of abrasion [22]. Coarse yarns are expected to have lower abrasive strength than fine yarns, because they are weaker, unable to resist external pressure, and their rough surface does not form a strong base, which lowers the quality [23]. With an increase in laundering cycles, the mass of the tested samples was reduced due to vigorous rubbing in the washing machine.
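Abrasion performance in this kind of study is typically summarized as the percentage mass loss after a given number of rubbing cycles, i.e. 100 x (mass before - mass after) / mass before. The short sketch below illustrates that bookkeeping for the laundering intervals used here; the specimen masses in it are hypothetical and are not data from this paper.

```python
def mass_loss_percent(mass_before_g, mass_after_g):
    """Percentage mass loss of a specimen after abrasion."""
    return (mass_before_g - mass_after_g) / mass_before_g * 100.0

# Hypothetical specimen masses (grams) for one weave at 0, 5, 10 and 15 washes
masses = {0: (2.500, 2.471), 5: (2.480, 2.440), 10: (2.465, 2.415), 15: (2.450, 2.392)}
for washes, (before, after) in masses.items():
    print(f"{washes:>2} washes: {mass_loss_percent(before, after):.2f}% mass loss")
```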
1,612.8
2020-08-12T00:00:00.000
[ "Materials Science" ]
Co-Orientation: Quantifying Simultaneous Co-Localization and Orientational Alignment of Filaments in Light Microscopy Co-localization analysis is a widely used tool to seek evidence for functional interactions between molecules in different color channels in microscopic images. Here we extend the basic co-localization analysis by including the orientations of the structures on which the molecules reside. We refer to the combination of co-localization of molecules and orientational alignment of the structures on which they reside as co-orientation. Because the orientation varies with the length scale at which it is evaluated, we consider this scale as a separate informative dimension in the analysis. Additionally we introduce a data driven method for testing the statistical significance of the co-orientation and provide a method for visualizing the local co-orientation strength in images. We demonstrate our methods on simulated localization microscopy data of filamentous structures, as well as experimental images of similar structures acquired with localization microscopy in different color channels. We also show that in cultured primary HUVEC endothelial cells, filaments of the intermediate filament vimentin run close to and parallel with microtubuli. In contrast, no co-orientation was found between keratin and actin filaments. Co-orientation between vimentin and tubulin was also observed in an endothelial cell line, albeit to a lesser extent, but not in 3T3 fibroblasts. These data therefore suggest that microtubuli functionally interact with the vimentin network in a cell-type specific manner. Introduction Cytoskeletal protein networks serve a number of crucial roles in living cells. Traditionally, three types of cytoskeletal networks are discriminated [1]. First, thin filaments with a diameter of about 10 nm, which consist of actin polymers with associated cross-linking proteins and "muscle-like" myosins give stiffness to cells and play important roles in the generation of motile forces. Second, microtubules, which consist of hollow tubules of the protein tubulin with an outer diameter of approximately 23 nm. Microtubules run throughout the cell and play a dominant role as cellular highways for the transport of cargo, which can be moved either outwards from or inwards to the center of the cell by specific, ATP-consuming motor proteins. The third type of cytoskeleton are termed intermediate filaments due to their intermediate unit-filament diameter. Over 60 different proteins such as keratins, vimentin and lamins have been identified, most of which have a strict cell type-specific distribution. Whereas each of these filament systems, their subunits and methods of polymerization have been the subject of many thousands of studies, remarkably little is known on how the three principal filament systems may interact and collaborate to keep the cell alive and functioning. This is due in part because imaging with confocal fluorescence microscopy provides insufficient resolution to reliably discriminate individual filaments in most cases, whereas electron microscopy does provide ample resolution but is much less suited to routinely identify and track the different filaments. The recent advances in optical super-resolution microscopy, including localization microscopy [2][3][4][5][6] and STED microscopy [7] do provide sufficient resolution to distinguish individual fluorescently labeled filaments within the cell, and they can be routinely applied in a convenient manner. 
The availability of superresolved multicolor images of filaments introduces the need for new quantitative tools to interrogate the organization of and mutual interrelations between the different cytoskeletal elements. Tools developed for diffraction limited fluorescence microscopy focused on the problem of co-localization analysis. This analysis asks whether images show evidence for possible interactions between the molecules imaged in both color channels. Typically the answer to this question is expressed in terms of: 1) the Pearson correlation coefficient between the intensities [8]; 2) the Manders coefficients, which are defined as the fraction of the total intensity per channel that occurs in co-localizing pixels [9], i.e. pixels whose values in both channels exceed certain thresholds; or 3) the overlap fractions of segmented objects in both color channels [10]. The different measures of co-localization cannot simply be applied to localization microscopy techniques; these techniques produce datasets consisting of coordinates of localized molecules instead of intensity values in pixels. This suggests that coordinate based analyses of distances between molecules should be used instead. Proposed measures include: the pair-correlation function between coordinates in two color channels [11]; a hypothetical potential energy function that is estimated from the distances from each localization to the nearest neighbor in the other color channel [12]; and the rank correlation between the distances from a localization to its neighbors in the same color channel on the one hand and distances to its neighbors in the other channel on the other hand [13]. However, all these analyses only consider the spatial proximity of molecules in different color channels. They do not take into account that the molecules reside on extensive structures such as filaments that have additional geometric features such as size, orientation or curvature. Here we report a rigorous quantitative framework for analyzing the simultaneous co-localization and similarity in orientation of structures in multicolor images. We will refer to the combination of co-localization and orientational alignment as co-orientation. We focus here on the orientation as a geometric feature as it presents a particularly salient property of cytoskeletal filament networks. Because the orientation varies with the length scale at which it is evaluated, we include this scale as a separate informative dimension for the analysis. We demonstrate our methods on simulated localization microscopy data of filament structures, as well as experimental images of filamentous structures acquired with localization microscopy in different color channels. Software for our co-orientation analysis is freely available in the form of Matlab code at http://www.diplib.org/add-ons/. Orientation measurement The co-orientation analysis starts with the determination of the orientation in each color channel. The two images of two different molecular species imaged in color channels l = 1, 2 will be denoted with I l ðxÞ. For now we will assume these to be two-dimensional and we will discuss the generalization to three-dimensional images below. In this work we will only apply our methods to localization microscopy data. The estimated fluorophore coordinates are converted into images by binning them into two-dimensional histogram with bin sizes of 10 nm. 
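The first processing step, rendering the localization coordinates as a 2D count histogram with 10 nm bins, can be sketched as follows. This is a minimal NumPy illustration with hypothetical variable names, not the DIPlib implementation referenced in the text.

```python
import numpy as np

def bin_localizations(xy_nm, fov_nm, bin_nm=10.0):
    """Bin localization coordinates (in nm) into a 2D count histogram.

    xy_nm  : (N, 2) array of x, y coordinates
    fov_nm : (width, height) of the field of view in nm
    """
    edges_x = np.arange(0.0, fov_nm[0] + bin_nm, bin_nm)
    edges_y = np.arange(0.0, fov_nm[1] + bin_nm, bin_nm)
    img, _, _ = np.histogram2d(xy_nm[:, 0], xy_nm[:, 1], bins=(edges_x, edges_y))
    return img  # pixel value = number of localizations falling in that 10 nm bin
```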
It should be noted here that although all subsequent operations are carried out on pixelated images, this is not problematic when the pixel size is smaller than 1.5 times the localization precision [14] because the information lost at small length scale is limited. For smaller pixel sizes we do not expect that the choice of pixel size affects any outcomes. Note also that in principle rendering localizations as Gaussian blobs the size of the localization error distribution provides a better data representation than the histogram binning applied here [15]. However, in practice this rendering is too slow due for the large number of required renderings for the significance tests that are discussed below. The orientations of the filaments in the images are analyzed by considering orientation space representations I l ðx; Þ [16], which quantify for each positionx how much evidence there is for the presence of structures with an orientation ϕ. By considering multiple orientations, it is possible to determine the orientations of several crossing filaments at the same location. To compute I 1 ðx; Þ and I 2 ðx; Þ, the images I 1 ðxÞ and I 2 ðxÞ are first filtered with a set of orientation selective filters Fx; ð Þ, which have an orientation ϕ between −π/2 and π/2 with respect to the x-axis. Applying these filters gives the orientation space representation: where à denotes the convolution operation, and the filters Fx; ð Þare defined by their Fourier transforms:F Here ϕ q is the angle ofq with respect to the x-axis, w ϕ is the angular bandwidth of the filter, s o is the length scale for which the orientation is evaluated and w q is the bandwidth of the filter with respect to the spatial frequency magnitude q ¼ jqj. For this work we chose w q = 0.8/s o and the orientation scale s o was determined by selecting the smallest value that still had a good orientation selectivity upon visual inspection of the orientation space representation. Generally, the scale should be set such that the features of interest have a high contrast with respect to the local background and a high contrast with respect to the responses at the same location to filters with different orientations. However, it does not make sense to choose a scale smaller than the resolution of the images [14]. The width w ϕ is derived from the number of independent orientations n o that are analyzed via w ϕ = π/n o . Here we used n o = 41 for simulated datasets and for experimental datasets, which gives an angular resolution of about 77 mrad. This is on the same order as the angular extent of linelike structures with a width w at a scale s o which is w/s o * 0.05 (for w * 10 nm and s o = 200 nm). Note that by definition I lx ; þ p ð Þ¼I lx ; ð Þ. Next, we take the absolute value of the orientation space representation and subtract the minimum value per locationx. Subsequently we normalize the outcome such that the sum over ϕ in each location equals the number of localizations by computing: Àp=2 jI l ðx; 0 Þjd 0 À p min ðjI l ðx; ÞjÞ Þcan be interpreted as the expected density of localizations in channel l at positionx belonging to molecules in filaments with local orientation ϕ. The subtraction of the minimum corrects for the non-zero response given by the filters Fx; ð Þfor orientations that do not correspond to the orientations of the filaments atx. For three-dimensional images, the three-dimensional orientation can be analyzed in a similar manner, see e.g. [18]. 
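Returning to the two-dimensional case, a rough sketch of the orientation-space construction is given below: a bank of orientation-selective filters is applied in the Fourier domain, and the responses are normalized per pixel so that, after subtracting the per-pixel minimum over orientations, they sum to the local number of localizations. The filter used here (an angular Gaussian of width w_phi combined with a radial band-pass around 1/s_o of width w_q = 0.8/s_o) is only an approximation in the spirit of the filters defined in the text, not a reimplementation of the exact kernel or of the authors' DIPlib code.

```python
import numpy as np

def orientation_space(img, s_o=200.0, px=10.0, n_o=41):
    """Approximate orientation-space representation I(x, phi) of a 2D histogram image."""
    ny, nx = img.shape
    qx = np.fft.fftfreq(nx, d=px)
    qy = np.fft.fftfreq(ny, d=px)
    QX, QY = np.meshgrid(qx, qy)
    q = np.hypot(QX, QY)
    phi_q = np.arctan2(QY, QX)
    w_phi, w_q = np.pi / n_o, 0.8 / s_o
    F_img = np.fft.fft2(img)
    phis = np.linspace(-np.pi / 2, np.pi / 2, n_o, endpoint=False)
    stack = np.empty((n_o, ny, nx))
    for k, phi in enumerate(phis):
        # angle difference wrapped to (-pi/2, pi/2], since orientations are defined modulo pi
        dphi = np.angle(np.exp(2j * (phi_q - phi))) / 2.0
        filt = (np.exp(-dphi**2 / (2 * w_phi**2))
                * np.exp(-(q - 1.0 / s_o)**2 / (2 * w_q**2)))
        stack[k] = np.abs(np.fft.ifft2(F_img * filt))
    # per-pixel normalization: subtract the minimum over phi, then rescale so the
    # sum over phi equals the local localization count (the histogram value)
    stack -= stack.min(axis=0, keepdims=True)
    norm = stack.sum(axis=0, keepdims=True)
    norm[norm == 0] = 1.0
    return stack / norm * img[None, :, :], phis
```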
The generalization of the normalization in Eq 4 for three-dimensional orientation space representation involves normalization over solid angles. However, the orientation difference can always be expressed as a single angle. Co-orientation analysis The next step in the analysis is to define a measure that quantifies both the co-localization and orientational alignment of structures in the two color channels. For this purpose we extend the concept of the cross-correlation function used in localization microscopy [11] to the generalized cross-correlation function: where h.i denotes the averaging operation over bothx and ϕ. The averaging over the spatial coordinatex is restricted to the selected region of interest, which typically excludes regions outside cells. The multiplication with π gives c Dx; D ð Þ¼1 for statistically independent images. Often it will be convenient to compute the average of c Dx; D ð Þover circles of constant distance jDxj ¼ r, which we will denote with c(r, Δϕ). An illustration of the steps needed to compute c Dx; D ð Þfrom the superresolution images is shown in Fig 1. The cross-correlation in c Dx; D ð Þis efficiently computed using three-dimensional (x, y, ϕ) Fourier transformations: where W is a two-dimensional binary mask image that has a value of 1 inside the selected region of interest and 0 outside. The interpretation of c(r, Δϕ) is as follows: for a typical point on a filament in one channel, it is the density of filaments in the other channel at a distance r with a relative orientation (i.e. angle with the first filament) of ϕ which is normalized by the density that would have been obtained if the filaments were statistically independent. Alternatively, it could also be interpreted as a normalized probability density for two randomly chosen points on two filaments in different color channels to have a separation r and an orientation difference ϕ between the filaments they belong to. Several examples to illustrate the interpretation of the co-orientation plot are shown in S1 Fig, S2 Fig and S3 Fig. Testing for statistical significance A measure for the strength of the co-orientation in an image is given by the normalized anisotropic Ripley's K statistic K k (R), which is computed as: where A denotes a circular domain with radius R. The rationale for choosing a cos (2Δϕ) weight is the following: assuming that c Dx; ð Þis symmetric with respect to Δϕ, this weight returns the strength of the second nonzero term of a Fourier series expansion of c Dx; ð Þ. Therefore it expresses to first order the tendency of c Dx; ð Þto assume higher values for smaller angles Δϕ. Filaments with relative smaller angles contribute positively to K k (R) whereas perpendicularly crossing filaments have a negative contribution. The first term in the same Fourier series expansion of c Dx; ð Þhas a constant weight with respect to Δϕ and thus gives a result that is proportional to Ripley's K statistic and expresses co-localization rather than co-orientation. The higher order terms in the Fourier series expansion could be used to describe more complicated relationships between the co-localization and orientations of filaments. The anisotropic Ripley's K statistic K k (R) was used to test the statistical significance of the co-orientation of individual images. The radius R is chosen beforehand by the experimenter and expresses the range of the co-orientation effect. 
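The generalized cross-correlation c(Δx, Δϕ) and the cos(2Δϕ)-weighted statistic K∥(R) described above can be sketched as follows. The published implementation uses a region-of-interest mask and the π normalization of Eq (5); the version below simplifies both (periodic boundaries, discrete orientation bins, normalization such that independent channels give roughly 1), so it illustrates the structure of the computation rather than reproducing it exactly.

```python
import numpy as np

def co_orientation(I1, I2, px=10.0):
    """Cross-correlation of two orientation stacks (shape n_o x ny x nx) over x, y and phi."""
    F1 = np.fft.fftn(I1)
    F2 = np.fft.fftn(I2)
    corr = np.real(np.fft.ifftn(np.conj(F1) * F2)) / I1.size
    c = corr / (I1.mean() * I2.mean())          # ~1 everywhere for statistically independent channels
    n_o = I1.shape[0]
    dphi = np.fft.fftfreq(n_o, d=1.0 / np.pi)   # orientation lags with spacing pi / n_o
    return c, dphi

def K_parallel(c, dphi, px, R):
    """Rough, unnormalized K_parallel(R): cos(2*dphi)-weighted sum of c over |dx| <= R.
    The cos(2*dphi) weight removes the isotropic (co-localization only) part."""
    n_o, ny, nx = c.shape
    dx = np.fft.fftfreq(nx, d=1.0 / (nx * px))  # spatial lags in nm, in FFT ordering
    dy = np.fft.fftfreq(ny, d=1.0 / (ny * px))
    DX, DY = np.meshgrid(dx, dy)
    disc = np.hypot(DX, DY) <= R
    w = np.cos(2.0 * dphi)[:, None, None]
    return float((c * w * disc[None, :, :]).sum()) * px**2 * (np.pi / n_o)
```

Averaging c over circles of constant |Δx| would give the c(r, Δϕ) plots discussed in the text, and K_parallel is the quantity whose sign and magnitude are tested for significance.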
In theory, all possible radii R could be relevant and could all be tested, while keeping in mind that tests at different radii are not statistically independent. However, in practice this is unnecessarily complicated and a single radius R can be set such that the main peak in the co-orientation plot at small distances r is captured in the significance test. Alternatively, prior expectations about the range of physically meaningful effects can also be used to determine a single value of R for testing. The null hypothesis for the significance test is that the filaments in both color channels do not interact and are thus statistically independent, which implies that the expected value of Steps for obtaining the co-orientation plot. To compute the co-orientation plot, the images in both color channels are first processed by a filter bank of orientation selective filters (shown here for an orientation scale of 100 nm). This provides orientation space representations of both channels with the evidence per orientation in each pixel. The cross-correlation between these representations then leads to the co-orientation plot showing the correlation c as a function of the distance between localizations and angle between the filaments they belong to. K k (R) is 0. The expected deviations from 0 under the null hypothesis are very difficult to treat analytically due to the statistical dependencies between the localizations in each color channel [19]. These dependencies arise firstly because the localized molecules are constrained in their positions because they reside in filaments and secondly because each molecule is localized multiple times. Therefore we assume as a working assumption that under the null hypothesis, K k (R) is normally distributed with a mean value of 0 and variance s 2 K , which was estimated as follows. Firstly, a circular region of interest is selected in the images. Next, the image of the second color channel is rotated with respect to the image of the first color channel over equally spaced angles θ between 0 and 2π. Note that the ROI was chosen to be circular in order to ensure that the sum of pixel values in each channel does not change with the rotation. For each rotation we recomputed K k (R), giving the co-orientation strength per rotation K k (R; θ). The variance s 2 K was then computed as: where n θ is the number of angles θ (see S1 Text for a derivation). Given s 2 K , the probability of having a value K k (R) at θ = 0 under the null hypothesis is given by where erf(.) denotes the error function. Note that our method resembles the approach of Van Steensel et al. [20] for qualitatively determining if the co-localization in diffraction limited fluorescence imaging may be significant. In this approach the image in one color channel is shifted instead of rotated. Furthermore, it is important to note that s 2 K does not accurately predict the uncertainty in K k (R) if the null hypothesis does not hold. Therefore it cannot be used to test differences in co-orientation strength between images. Instead, sets of values for K k (R) obtained from several datasets representing one biological condition can be compared with another set of values representing another condition using standard statistical tests such as the Mann-Whitney U test [21]. Local co-orientation In order to detect which parts of a region of interest exhibit the strongest co-orientation, we developed a scheme for visualizing the local co-orientation strength. 
In this scheme we determine K k (R) in square subregions of the image with a size of 3R which were displaced by multiples of R horizontally or vertically with respect to each other, i.e. two-thirds of the pixels in each region overlapped with two-thirds of the pixels in each adjacent region. For each subregion, we took the previously determined orientation space representationsĨ lx ; ð Þand used it to compute c Dx; D ð Þ, where the average densities hI l i across the field of view were used in the denominator rather than the averages per subregion. K k (R) then follows from c Dx; D ð Þas before. To ensure a smooth visualization, the values of K k (R) were assigned to the center point of each subregion and linearly interpolated in between these points. A visualization of the local co-orientation was then obtained by applying a blue overlay to the image of the filaments, where the negative pixel values were set to 0, the brightest 3% of the pixels were clipped and the remaining pixels were linearly scaled between 0 and 255. See S6 Fig for an example of how the percentage of clipped pixels affects the appearance of the overlay. Note that in this visualization scheme, crossings of filaments lead to a low score for the local co-orientation strength which may be unintuitive in some cases. Instead, it is also possible to replace the cos(2ϕ) weight in the computation of K k (R) in Eq 7 by a cos 2 (ϕ) weight. However, unlike with the cos(2ϕ) weighting, the cos 2 (ϕ) weighting also makes the score sensitive to mere co-localization without orientational alignment. Therefore it is generally best to compare images with both kinds of weighting for identifying areas with strong co-orientation. A somewhat computationally faster method to approximate the local co-orientation strength can be implemented using convolution operations. Specifically, the orientation space representationĨ 1 has to be convolved with a kernel gx; ð Þ ¼ cos 2 ð ÞOx=R ð Þ, subsequently multiplied byĨ 2 and summed over ϕ, followed by a smoothing with a kernel Ox=3R ð Þand finally a multiplication by a normalization constant. Here the circular kernel Ox ð Þ ¼ 1 if jxj < 1 and 0 otherwise. Simulations of test data Simulated localization microscopy images in two color channels were obtained in two steps. Firstly, two-dimensional images of filaments were generated for both color channels. Secondly, positions of fluorescent molecules are generated and several localizations of each of these fluorophores were simulated. The filaments in one color channel were generated according to the two-dimensional wormlike chain model of Kratky and Porod [22]: All filaments consisted of 10 4 connected segments of 1 nm. The position of the central segment was randomly positioned within a circular region with a radius of FOV ffiffi ffi 2 p þ L=2, where FOV = 4 μm is the size of the field of view for the final image and L is the length of the filament. This circular region was deliberately chosen to be large enough to ensure a homogeneous and anisotropic distribution of filaments within the field of view. The orientation of the central segments was chosen randomly between −π and π. Angles between subsequent segments of the filament were taken from a normal distribution with standard deviation 1 nm/ξ, where ξ is the persistence length of the filament. 
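The filament simulation just described (a 2D wormlike chain built from 1 nm segments whose turning angles are normally distributed with standard deviation 1 nm/ξ) and the drawing of localizations along it can be sketched as below. The localization part anticipates the parameters quoted in the following paragraphs (min of a Poisson and a geometric number of repeats per fluorophore, Gaussian jitter for the antibody linkage); the fluorophore density and the fixed localization precision used here are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def wormlike_chain(persistence_nm, n_seg=10_000, seg_nm=1.0, fov_nm=4000.0):
    """2D wormlike chain: turning angles ~ N(0, seg/xi); chain centered on its middle segment."""
    angles = rng.normal(0.0, seg_nm / persistence_nm, size=n_seg)
    theta = rng.uniform(-np.pi, np.pi) + np.cumsum(angles)
    steps = seg_nm * np.column_stack([np.cos(theta), np.sin(theta)])
    pts = np.cumsum(steps, axis=0)
    pts -= pts[n_seg // 2]                      # center on the middle segment
    pts += rng.uniform(0.0, fov_nm, size=2)     # random placement (simplified w.r.t. the text)
    return pts

def simulate_localizations(filament_pts, density_per_nm=0.05, label_jitter_nm=5.0 / 2.355):
    """Draw fluorophores along the chain and localize each one several times."""
    n_fluor = rng.poisson(density_per_nm * len(filament_pts))
    fluor = filament_pts[rng.integers(0, len(filament_pts), n_fluor)]
    fluor = fluor + rng.normal(0.0, label_jitter_nm, fluor.shape)   # antibody linkage size (FWHM 5 nm)
    locs = []
    for p in fluor:
        m = min(rng.poisson(25), rng.geometric(1.0 / 11))           # repeats per fluorophore
        sigma = 10.0                                                 # placeholder localization precision (nm)
        locs.append(p + rng.normal(0.0, sigma, size=(m, 2)))
    return np.vstack(locs) if locs else np.empty((0, 2))
```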
The filaments in the second color channel were obtained in various manners: firstly by displacing each filament in the first channel over a fixed distance perpendicular to its orientation; secondly by independently simulating them in the same way as the filaments in the first channel but with a different persistence length; thirdly by displacing each segment perpendicular to its orientation with a sinusoidally modulated magnitude of the displacement such that the filaments in the second channel appeared to be twisted around those in the first channel. Finally, image representations of the filaments were made by counting the number of connecting points between segments in pixel bins of 5 nm in size, and convolving the resulting images with a Gaussian kernel with a full width at half maximum FWHM = 5 nm to account for the finite width of the filaments. Subsequently, localization datasets were simulated from the images of the filaments. A Poisson distributed number N of fluorophores was generated with a relative density proportional to the pixel values in the filament images. The positions of these fluorophores were then displaced with a Gaussian probability density with FWHM = 5 nm to account for the size of the antibodies linked to the fluorophores. Each fluorophore was then assigned a random number of localizations M, defined as the minimum of two quantities, M_poisson and M_geo, drawn from a Poisson distribution with an expected value of 25 and a geometric distribution with an expected value of 11, respectively. Localizations were then finally displaced with a Gaussian probability density with standard deviation σ, where a different value of σ was randomly generated for each localization based on the expression in Eq 4 in Refs. [23,24] and using the following values: the number of signal photons per localization n_ph (drawn from a geometric distribution with an expected value of 2000), background photons b (average of 9 × 9 Poisson distributed values with expected value of 1), and the PSF width σ_a (Gaussian distributed with mean 0.3 × λ/NA = 0.3 × 670/1.45 ≈ 1.38 and standard deviation of 2% of the mean; this is roughly the distribution we obtain when fitting the PSF of Alexa Fluor 647 fluorophores and is in agreement with the range of previously suggested values [25]). All images in which the simulated datasets are visualized were obtained by rendering the localizations as Gaussian blobs with a kernel size equal to σ.

Acquisition and processing of experimental data

Sample preparation. Primary human umbilical vein endothelial cells (HUVECs) were purchased from Lonza and cultured on fibronectin (Sanquin)-coated dishes in EGM-2 medium, supplemented with SingleQuots (Lonza) at 37°C and under 5% CO2 until passage 8. To stain vimentin and tubulin, HUVEC cells were grown for 24 hours on cleaned #1.5 coverslips in Medium 200 (Life Technologies) with the addition of Low Serum Growth Supplement (LSGS) (Life Technologies) at 5%. Immortalized Human Vascular Endothelial Cells (EC-RF24) [26] were grown in a mixture of HUVEC cell medium, 25% DMEM and 25% RPMI. NIH-3T3 mouse fibroblasts were maintained in DMEM supplemented with 10% fetal calf serum (FCS) as previously described [27]. The cells were then fixed with 10% MES buffer (100 mM MES, pH 6.9, 1 mM EGTA and 1 mM MgCl2) and 90% methanol for 5 minutes on ice.
After blocking with 5% Bovine Serum Albumin (BSA) for 1 hour, HUVEC and EC-RF24 cells were incubated with rabbit anti-tubulin polyclonal antibodies (Abcam) and mouse anti-vimentin monoclonal antibodies (Clone V9-Dako) for 1 hour. NIH-3T3 mouse fibroblasts were stained with anti-tubulin antibody raised in mouse (Sigma-Aldrich) and rabbit monoclonal antibody against vimentin (GeneTex). Subsequently all the cells were incubated with goat anti-rabbit and goat anti-mouse antibodies (Alexa 488, Alexa 647, Invitrogen) for 30 minutes. All the fixation and staining steps were done at room temperature. Control experiments were also performed where the fluorophore types labeling the secondary antibodies were swapped to rule out color-related artefacts. In the case of actin and keratin, primary keratinocytes isolated from newborn (1-3 day old) plectin deficient mice were kindly provided by Prof. Sonnenberg (NKI, Amsterdam, the Netherlands) [28]. Glutaraldehyde fixation was used to preserve both keratin and actin structure. Briefly, this fixation consisted of a first incubation step in 0.3% glutaraldehyde + 0.25% Triton in cytoskeleton buffer (10 mM MES pH 6.1, 150 mM NaCl, 5 mM EGTA, 5 mM glucose, and 5 mM MgCl 2 ) for 2 min. and a second step with 0.5% glutaraldehyde in the same buffer for 10 min. Subsequently, the sample was treated with freshly made 0.1% NaBH 4 in PBS. After fixation, samples were extensively washed with PBS and blocked with 5% BSA for 40 minutes. Staining was performed with rabbit anti-keratin 14 polyclonal antibody (Covance) and Phalloidin conjugated to Alexa Fluor 488 fluorophores (Invitrogen). Samples were incubated with a goat anti-rabbit secondary antibody labeled with Alexa Fluor 647 fluorophores (Invitrogen) afterwards. All the steps were performed at room temperature. Control experiments were also performed where the Phalloidin was labelled with Alexa Fluor 647 and the goat anti-rabbit antibodies with Alexa Fluor 488 to rule out color-related artefacts. Details will be described elsewhere. Before imaging, a waiting time of 30 min. was observed to allow the sample to stabilize and avoid initial drift. Images were then taken in TIRF mode at 100 frames per second with image sizes of 180 × 180 or 400 × 400 pixels; the backprojected pixel size was 100 nm. For all datasets, images with 642 nm illumination were acquired first. Localization analysis of experimental data. The acquired movies were processed by estimating fluorophores' positions using a fast algorithm [29] on a Quadro 5000 GPU (NVIDIA). The method for finding candidate regions of interest for position estimation has been documented in the literature [30]. Localizations corresponding to the same activation event were subsequently combined by grouping spatially nearby localizations (i.e. less than three times the sum of the localizations' precisions apart) in subsequent frames into single localization events. The center position of the grouped localizations was determined as the weighted average of the localizations with the inverse of the squared localization precisions as weights. Localizations were then filtered based on the number of signal photons per localization event and the PSF width. Subsequently, localizations were corrected for lateral stage drift using frame-by-frame cross-correlation, as documented elsewhere [31,32]. 
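The grouping and combination of repeated localizations described above can be sketched as follows (a minimal illustration; the formula for the precision of the merged event is an assumption based on standard error propagation, since the text does not state how it was computed).

import numpy as np

def same_event(p1, s1, p2, s2):
    # Grouping criterion from the text: closer than three times the sum of the precisions
    return np.linalg.norm(np.asarray(p1) - np.asarray(p2)) < 3.0 * (s1 + s2)

def merge_activation_event(xy, sigma):
    """Combine localizations of one activation event into a single localization.

    xy    : (n, 2) positions of localizations grouped into one event
    sigma : (n,) localization precisions
    """
    xy, sigma = np.asarray(xy, float), np.asarray(sigma, float)
    w = 1.0 / sigma ** 2                                # inverse squared precisions as weights
    center = (xy * w[:, None]).sum(axis=0) / w.sum()    # precision-weighted average position
    return center, 1.0 / np.sqrt(w.sum())               # combined precision (assumed formula)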
All images in which the experimentally obtained localizations are visualized were obtained by rendering the localizations as Gaussian blobs with a kernel size equal to the estimated localization precision. Pixels whose values were in the highest 2% (5% for images of actin and keratin) of all non-zero pixels were clipped to obtain sufficient contrast for display, and subsequently all intensities were linearly stretched between 0 and 255. Color channel registration. Localizations of the Alexa Fluor 647 fluorophore (red) channel were mapped onto the Alexa Fluor 488 fluorophore (green) channel using an affine mapping. This mapping was estimated in a least squares estimation procedure with 8 different datasets of (in total 448) fluorescent beads visible in both color channels. Briefly, 100 nm TetraSpeck microspheres (T7284 blue green orange and dark red, Life Technologies) were diluted to a ratio of 1:100 and dried on an ultraclean coverslip. The bead-dried coverslips were mounted on the microscope with 500 μL of MQ water and imaged on 8 different fields of view where beads were well separated. The beads were localized using the same algorithm as above. The target registration error of this mapping procedure was determined to be 16 nm (by leaving out one of the recordings at a time when computing the mapping, so that it could be used to independently assess the error) [33].

Simulated datasets

To demonstrate the proposed co-orientation measurement method, we simulated two-color localization microscopy datasets of samples with filament networks in both channels with a well-defined relationship between them. As a first example, we used a sample with 200 filaments with a persistence length ξ = 5 μm in the red color channel, labeled with 10^4 fluorophores in total; each of these filaments was accompanied by a filament in the green color channel at a fixed distance of 50 nm. This resulted in the dataset shown in Fig 2a, and the corresponding co-orientation plot of the generalized cross-correlation function c(r, Δϕ) in Fig 2b (for a scale s_o = 200 nm for the orientation analysis). The plot shows the distance r between the localizations in both color channels on the vertical axis and the orientation difference Δϕ between the filaments to which those localizations belong on the horizontal axis. The plot shows a clear peak at a distance of approximately 50 nm and an orientation difference close to 0, confirming that filaments are accompanied by another filament at a distance of 50 nm in the other color channel. The enhanced correlation for larger angles Δϕ is caused by the finite size of the orientation selective filters: when filaments cross or come in close proximity to each other, the filters give a non-zero response for orientations other than those of the filaments themselves. For larger distances r > 200 nm, c(r, Δϕ) decays to a value of 1, meaning that filaments at those distances apart appear statistically independent from each other. As a second in-silico example, we used a sample in which there was no relationship between the filaments in both color channels. Unlike the previous example, the filaments in the green channel were now independently generated, but with a persistence length ξ = 1 μm. A representative example of a result under this condition (out of n = 500 simulations) is visualized in Fig 2c and 2d.

The third simulation example serves to illustrate the importance of the scale of the orientation analysis.
For this example, 50 filaments labeled with 5,000 fluorophores were simulated for the red channel as before. The filaments in the green channel were twisted around those in the red channel with a maximum separation of 50 nm and with a periodicity of one twist per 300 nm. The resulting dataset is visualized in Fig 3a. Co-orientation plots for these data were computed for scales s_o = 50 nm and s_o = 500 nm for the orientation analysis, which are shown in Fig 3b and 3c respectively. The plot for s_o = 50 nm shows two peaks at orientation differences of about ±40°, whereas the plot for s_o = 500 nm only has a single peak at 0°. These plots thus show that the filaments in both channels indeed display co-orientation at larger length scales, while at shorter length scales there is a signature of the filaments crossing each other. This shows that the scale s_o of the orientation analysis can itself be used as a separate dimension for the analysis of co-orientation in an extensive co-orientation assay. The shortest length scale for which the orientation analysis could be meaningfully applied is determined by the resolution of the images [14]; at shorter length scales the data do not contain enough information about the filaments for an accurate analysis.

Significance testing

The question that arises upon inspection of the co-orientation plots is for which values of c(r, Δϕ) the co-orientation can be said to be statistically significant. To this end we computed the normalized anisotropic Ripley's K parameter K_∥(R) with R = 200 nm for the simulated datasets in Fig 2 to quantify the co-orientation strength. Subsequently, we applied the significance test outlined in the materials and methods section, which extracts the uncertainty in K_∥(R) by rotating the image in the green channel with respect to the red channel over 49 equally spaced angles θ between 0 and 2π and recomputing K_∥(R) for every rotation. The profiles of K_∥(R) as a function of the rotation angle θ were obtained for the datasets in Fig 2. We validated the proposed significance test by simulating 500 datasets where the filaments in both color channels were independent, in the same manner as for the data shown in Fig 2c. For each of these simulations we applied the proposed significance test and computed the p-value for the value of K_∥(R) at θ = 0 for R = 200 nm. We found that the p-values returned by the test were consistent with a uniform distribution between 0 and 1 (see S4 Fig): a one-sample two-sided Kolmogorov-Smirnov test revealed no significant difference at a 0.05 significance level (p = 0.47). This is exactly what is required, as the returned p-values should report the probability of obtaining values of K_∥(R) larger than the one being tested if the null hypothesis is true. Additionally, the assumption that K_∥(R) is normally distributed was not rejected in a Shapiro-Wilk test at a significance level of 0.05 (p = 0.42). However, 38 of the 500 simulated datasets had a p-value smaller than 0.05, which is significantly more than the expected 25, indicating that the p-values obtained from the proposed significance test are not exact. This is attributed to the RMS error of 31% in the estimated standard deviation of K_∥(R), since the normality of K_∥(R) itself was not rejected. The test can still be used though, provided that a somewhat more conservative threshold than 0.05 is chosen for the p-value.
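A minimal sketch of this rotation-based significance test is given below. It assumes a hypothetical routine coorientation_strength(img1, img2, R) that computes K_∥(R) from two circularly masked channel images; the zero-mean form of the variance estimate and the one-sided erf-based p-value follow the description in the methods, but the exact elided formulas are assumptions.

import numpy as np
from scipy.ndimage import rotate
from scipy.special import erf

def significance_test(img1, img2, R, coorientation_strength, n_theta=49):
    # Observed co-orientation strength at rotation angle theta = 0
    K_obs = coorientation_strength(img1, img2, R)

    # Rotate the second channel over equally spaced angles and recompute K(R)
    thetas = np.linspace(0.0, 360.0, n_theta, endpoint=False)
    K_rot = np.array([coorientation_strength(
        img1, rotate(img2, angle=t, reshape=False, order=1), R) for t in thetas])

    # Variance of K(R) under the null hypothesis, assuming a zero mean
    s2_K = np.mean(K_rot ** 2)

    # One-sided p-value: probability of a value at least as large as K_obs under a
    # normal distribution with mean 0 and variance s2_K
    p = 0.5 * (1.0 - erf(K_obs / np.sqrt(2.0 * s2_K)))
    return K_obs, p

Sets of K_∥(R) values obtained under different biological conditions could then be compared with a standard routine such as scipy.stats.mannwhitneyu, in line with the Mann-Whitney U test mentioned in the methods.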
Application to experimental data of cytoskeletal filaments

We applied the co-orientation analysis to experimental data of tubulin and vimentin and of actin and keratin. Multicolor localization microscopy images of tubulin and vimentin were obtained from primary human umbilical vein endothelial cells. Fig 5a and 5c show two clear example results at stable cell edges, with tubulin in red and vimentin in green. The corresponding co-orientation plots in Fig 5b and 5d confirm the strong co-orientation effect that appears to be present. The effect appears stronger in Fig 5b than in 5d, due to the lower density of the filaments, which leads to a stronger apparent bundling of the filaments. Correspondingly, the co-orientation strength parameter K_∥(R) for the selected circular ROI in Fig 5a is larger than that in the ROI in Fig 5c: the values are respectively 0.22 and 0.12 for R = 500 nm, and in both ROIs the co-orientation is statistically significant (p ≪ 10^-3). The value of R = 500 nm was chosen here such that K_∥(R) just incorporates the primary peak in the co-orientation plots in the analysis. The observed co-orientation could also just be seen when the co-orientation analysis was applied to the TIRF images of the cells shown in Fig 5a and 5c (see S7 Fig). Generally though, the higher resolution of SR microscopy is much more suitable, and often will be necessary, to detect the co-orientation between these intricate filament networks. Note that the filament networks in these images show a clear preferential direction in these cells. Local deviations from these global trends could be investigated for example by filtering out the dominant filament orientations in the orientation space representations of the tubulin and vimentin images. Alternatively, the co-orientation plot could be normalized with respect to its average value at each distance r in order to determine how the alignment changes with r independent of the co-localization. The observed co-orientation between vimentin and tubulin is not a universal feature of any image showing two types of filaments. Consider for example Fig 5e, which shows a localization microscopy image of actin (green) and keratin (red) obtained from plectin deficient keratinocytes. As opposed to the previous images of tubulin and vimentin, there is no apparent co-orientation between actin and keratin: the corresponding co-orientation plot in Fig 5f does not exhibit a strongly peaked correlation score for small distances and small relative angles between the actin and keratin filaments. Indeed, no significant co-orientation (p = 0.20) was found in a statistical significance test for R = 500 nm (p = 0.065 for R = 200 nm). To visualize how the co-orientation between filaments varies across the image, we evaluated the local co-orientation strength K_∥(R) in overlapping subregions of the image. The resulting values are then shown as an overlay in the blue color channel on top of the image of the filaments. Fig 6 shows an example of tubulin and vimentin filaments with this overlay for different values of R, with subregion sizes equal to 3R. The blue overlay effectively highlights regions with the strongest local co-orientation, where high densities of filaments with similar orientations are within a distance R from each other. Increasing R causes more filaments to positively contribute to K_∥(R). However, it also leads to a less localized evaluation of the co-orientation strength.
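A minimal sketch of how such a local co-orientation map could be assembled is given below, reusing the hypothetical coorientation_strength routine from the earlier sketch. Evaluating each subregion independently is a simplification: the actual scheme described in the methods reuses the global orientation-space representations and the field-of-view averages in the denominator.

import numpy as np
from scipy.ndimage import zoom

def local_coorientation_overlay(img1, img2, R_px, coorientation_strength):
    """Local co-orientation strength evaluated in overlapping square subregions.

    img1, img2 : 2D images of the two color channels
    R_px       : radius R in pixels; subregions are 3R wide and displaced by R
    """
    H, W = img1.shape
    size = 3 * R_px
    rows = range(0, H - size + 1, R_px)
    cols = range(0, W - size + 1, R_px)
    K = np.array([[coorientation_strength(img1[r:r + size, c:c + size],
                                          img2[r:r + size, c:c + size], R_px)
                   for c in cols] for r in rows])

    # Interpolate between subregion centres up to image size, set negative values to 0,
    # clip the brightest 3% of pixels and scale linearly to 0..255 for the blue overlay
    overlay = zoom(K, (H / K.shape[0], W / K.shape[1]), order=1)
    overlay = np.clip(overlay, 0.0, np.percentile(overlay, 97.0))
    return np.uint8(255.0 * overlay / max(overlay.max(), 1e-12))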
Regions in the image with crossing filaments exhibit lower values, because locally there is evidence both for and against orientational alignment of the tubulin and vimentin. An alternative visualization method that does not give this low response with crossing filaments is demonstrated in Fig 6d. In this method the cos(2ϕ) weight in the computation of K_∥(R) in Eq 7 is replaced by a cos²(ϕ) weight. This leads to more connected regions with high values in the blue channel, but this visualization also highlights regions with mere co-localization where filaments are not aligned.

Fig 6. (a-c) Localization microscopy images of tubulin (red) and vimentin (green). Blue overlays show the local co-orientation strength K_∥(R) in order to highlight the regions with the strongest local co-orientation. Increasing R causes more filaments that are further apart from each other to contribute to K_∥(R), but also causes K_∥(R) to appear less localized. (d) The same image as (b), but with the cos(2ϕ) weight in the computation of K_∥(R) in Eq 7 replaced by a cos²(ϕ) weight. This provides a visualization in which crossing filaments do not cancel the contributions to the local co-orientation strength of parallel filaments. However, this visualization is also sensitive to regions with mere co-localization where filaments are not aligned.

In larger images (i.e. of 18 × 40 μm), it was apparent that co-orientation between vimentin and tubulin occurred predominantly in the periphery of the cells, whereas at the center, close to the nucleus, co-orientation appeared substantially less. When we compared the right and left halves of Fig 7a, we found K_∥(R) = 0.11 (p ≪ 10^-3) and K_∥(R) = 2.9 × 10^-2 (p ≪ 10^-3) respectively for R = 200 nm. We next investigated whether co-orientation between tubulin and vimentin is a generic property of these filaments. We therefore compared data from HUVEC cells (Fig 7b) to data obtained from NIH-3T3 fibroblasts (Fig 7d), which also express both filament systems. Remarkably, little if any co-orientation was observed throughout the cell in these fibroblasts: for the ROI in Fig 7d we found no statistically significant co-orientation (K_∥(R) = 4.4 × 10^-2 and p = 0.14 for R = 200 nm). We also did not observe a difference between peripheral and more central parts of the cells. This may reflect lineage-dependency, i.e. a difference between endothelial cells and fibroblasts. We therefore also studied a cultured endothelial cell line, EC-RF24 (Fig 7c). Indeed, we observed significant co-orientation (K_∥(R) = 9.4 × 10^-2 and p ≪ 10^-3 for R = 200 nm), but both the strength and extent of co-localization appeared less than in HUVEC cells (K_∥(R) = 0.24 and p ≪ 10^-3 for R = 200 nm). These results show that our analysis method makes it possible to quantitatively address biological co-orientation. Associations between different filament systems have recently attracted significant attention and may either indicate the existence of physical crosslinks between the filaments [34] or, perhaps, reflect deposition of intermediate filaments following their transport along microtubules [35]. Our analysis tools will enable addressing such questions in an unbiased and quantitative manner.

Discussion

In this work, we describe a framework for the quantitative analysis of co-orientation: the simultaneous co-localization and orientational alignment of structures in images.
In this framework we consider the generalized cross-correlation between color channels as a function of the spatial separation and orientational difference of structures. Additionally we quantify the (local) co-orientation strength using an anisotropic Ripley's K parameter and use it to test the statistical significance of the co-orientation. Our co-orientation analysis sensitively and quantitatively describes the spatial association between vimentin and microtubules in HUVEC cells. Moreover, this association is cell-type specific and appears to occur predominantly in the cell periphery. Although the results presented in this manuscript are obtained using simulated and experimental localization microscopy datasets, the methods proposed here can be analogously applied to data obtained with other superresolution microscopy techniques as well as widefield and confocal microscopy data if the resolving power is appropriate for distinguishing the structures (e.g. filaments) in those images. The co-orientation measurement is affected to some extent by experimental factors such as autofluorescence and background fluorescence from out-of-focus structures, apparent blurring of structures by the imaging system (e.g. due to diffraction or localization error), cross-talk between color channels, noise, and stochasticity in the fluorescent labeling (see S1 Text for a detailed discussion). Particularly the localization error in localization microscopy, and analogously the point-spread function in other microscopy techniques, may have substantial effects on the measurement outcomes. Firstly, they will lead to a change in the effective scale at which the orientation of filaments is assessed. Secondly, they smear out the generalized cross-correlation function c(Δx, Δϕ), causing the peaks in the co-orientation plot to decrease in magnitude and shift to larger values of the distance between filaments. There are several practical aspects that merit attention when interpreting the outcome of the orientation measurement and significance test. Firstly, it is important to note that the measured co-orientation strength K_∥(R) may decrease if the density of co-oriented filaments in the field of view increases. This merits attention when comparing the measurement outcomes for different cells or cell lines if their filament densities are not similar. The co-orientation measurement could be made less sensitive to the filament density by changing the average values per channel in the denominator of c(Δx, Δϕ) into the root-mean-square values; however, this normalization has the important disadvantage of being sensitive to changes in noise levels, density of fluorescent labels on the filaments, or localization precision. Secondly, the density of filaments also affects the validity of the significance testing method. Its derivation assumes a Gaussian distribution of K_∥(R) under the null hypothesis, which may not hold if the number of filaments in the field of view is small. Furthermore, the accuracy with which the standard deviation of K_∥(R) is estimated under the null hypothesis also depends on the number of filaments in the field of view. Therefore it is recommended to consider a more conservative significance level than 0.05 when testing for statistical significance. Also, care should be taken with strong co-localization in the absence of co-orientation, as it violates the assumption of rotation invariance under the null hypothesis that is built into the test.
Thirdly, if no statistically significant co-orientation is detected, this does not imply that no co-orientation effect is present. The likelihood of successfully detecting co-orientation depends on how different the co-orientation effect appears from random variations in the proximity and alignment of unrelated filaments. Stronger co-localization or alignment therefore increases the detection probability. In addition, the detection probability will be higher for larger numbers of filaments, as random variations tend to average out more. Of course, imaging more samples will increase the probability of detection as well, provided that a suitable procedure for simultaneously performing multiple significance tests is used (e.g. false discovery rate control). The visualization schemes that were proposed either underemphasize co-orientation in regions with crossing filaments or overemphasize regions where co-localization with little orientational alignment is present. These visualization schemes may be improved in several ways. Firstly, a method for detecting regions with crossing filaments in both color channels could identify where each scheme is most appropriate. This could be achieved by applying a crossing detector per color channel and feeding the outputs into a co-localization measure. Secondly, higher order terms in the Fourier series expansion of c(Δx, Δϕ) could be used to describe the local geometry in regions with crossing filaments. For example, the term with cos(4ϕ) rather than cos(2ϕ) expresses co-orientation between a filament in one channel and one of two orthogonal filaments in the other channel. Finally, the quantitative approach presented in this manuscript was specifically focused on the analysis of co-orientation, i.e. the combination of co-localization of filaments and the alignment in their orientations. However, the quantitative framework presented here can be applied more generally to the analysis of co-localization in conjunction with other geometric properties, such as the curvature or length of filaments or the diameter of filament bundles. The analysis would then entail the computation of the cross-correlation between color channels as a function of these geometric properties, possibly at multiple measurement scales. Deriving a scalar metric for the magnitude of the observed effect similar to K_∥(R) then allows for the assessment of the local effect size and testing of its statistical significance. Approaches such as these will be of great use for exploiting the wealth of information provided by superresolution microscopy images for studying the spatial arrangements of cytoskeletal filaments and associated proteins relative to each other.

Supporting Information

S1 Fig. The effect of filament separation and field of view size on co-orientation. Simulated datasets consisting of two parallel straight lines with a density of fluorophores of one per 8 nm. The datasets for (a) and (c) differ in the distance between the filaments, which is 50 nm and 200 nm respectively. (b) and (d) show that this causes a shift and decrease in the peak of the co-orientation plot. The decrease is due to the larger radius over which c(Δx, Δϕ) is averaged; K_∥(R) for R > 200 nm would not be similarly affected. The datasets for (c) and (e) differ in the size of the field of view, resulting in an increase in the peak relative to the plot in (d). (EPS)

S7 Fig. The TIRF images of the cells shown in Fig 5a and 5c were used for co-orientation analysis.
The co-orientation plots in (b) and (d) show that the analysis can be applied to these TIRF images and does reveal the co-orientation between tubulin and vimentin (with a scale s_o = 200 nm for the orientation analysis). However, the correlation scores are much lower due to the blurring effect of the point spread function (see S1 Text for a more detailed analysis). The higher resolution of localization microscopy is therefore much more suitable, and often will be necessary, to detect the co-orientation between these intricate filament networks. (EPS)

S1 Text. Supporting theoretical analyses. First, a theoretical derivation of the equations used in the significance testing methods is provided. Second, an analysis is presented of the impact of various experimental factors on the accuracy of the co-orientation measurement. (PDF)
10,939.6
2015-07-10T00:00:00.000
[ "Biology", "Computer Science", "Physics" ]
Students' productive strategies when generating graphical representations: An undergraduate laboratory case study

Generating graphical representations is an essential skill for productive student engagement in physics laboratory settings, and is a key component in developing representational competency (RC). As physics lab courses have been reformed to prioritize student engagement in authentic scientific skills and practices, students experience additional freedom to decide what data to include in graphs and what types of graph(s) would allow for appropriate sensemaking towards answering experimental questions. With this, however, there is a dearth of PER literature highlighting the strategies students use while working to generate graphs using their own experimental data. This paper presents a case study analysis of a student group's lab investigation to call attention to how students enact various productive strategies when working towards generating graphical representations in an introductory physics laboratory course. Results of this case study analysis identify three productive strategies students enact when working to generate graphs in lab settings, each of which is related to aspects of representational competency (RC): 1) identifying (potential) covarying quantities; 2) choosing representative data subsets suitable for representation; and 3) iteratively reducing data and generating graphs to assess a graph's viability in answering research questions. Our analysis also shows how students frequently refer back to their experimental goals and hypotheses when deciding what strategies to enact to generate graphs.

I. INTRODUCTION

Visually representing scientific data is a central component of scientific inquiry [1,2]. Stakeholders across STEM disciplines describe representing experimental data as an integral component of laboratory experimentation (e.g., Refs. [3][4][5]).
Students should gain representational competency (RC) in multiple aspects of experimental data representation, including generating graphs and diagrams, identifying relevant features, and sensemaking with representations [5][6][7][8]. Generating graphical representations, a component of RC, is a scientific practice commonly utilized by professional physicists and is an essential skill associated with "thinking like a physicist." While a significant body of literature in the PER community has historically focused on student interpretation and sensemaking of graphical representations in lecture/studio settings (e.g., Refs. [9][10][11][12][13]), less scholarship focuses on how students generate graphs in laboratory settings using self-collected data [14]. To more effectively guide students in developing skills associated with generating graphical representations, instructors and researchers jointly require additional insight into the productive strategies students enact when working to generate graphical representations in laboratory courses, settings most closely associated with authentic scientific experimentation. In this paper, we ask the following research question: What productive strategies might students enact when working to generate graphical representations of self-collected data in physics laboratory course settings?

A. Generating Representations: A Component of Representational Competency

The ability to generate appropriate graphical representations is one component of representational competency (RC), which is defined as the "ability to appropriately interpret and produce a set of disciplinary-accepted representations of real-world phenomena and link these to formalised scientific concepts" [15]. Summarizing from Kozma and Russell (2005), students should be able to generate appropriate representations and effectively describe and use representations for a specific scientific purpose [16]. The ability to appropriately generate graphical representations has been shown to have numerous benefits for student learning of concepts and skills, though the extent of these benefits is still under scrutiny. For example, generating representations has been shown to increase conceptual learning and transfer in mathematics more than simple interaction with pre-generated representations [17]. As well, several studies have shown that generating representations within scientific domains leads to more productive mental model formations of the domain, leading to greater scientific inferencing and reasoning (e.g., [18,19]). Conversely, the results of Nitz et al. (2014) suggest a negative gain relationship between students generating representations and building conceptual knowledge [20], which refutes earlier studies (e.g., [21,22]). What is apparent is the lack of a conclusive understanding of how student-generated representations impact students' science conceptual and technical learning [23][24][25][26]. Due to this lack of clarity, in this study we treat development of graph generation RC as an individual component of learning to "think like a physicist", distinguishable from learning other RC components or scientific concepts [20].

B. Generating Representations in PER

Historically, the PER community has focused on identifying and understanding how students interpret and sensemake with pre-generated representations (e.g., [9][10][11]).
For example, McDermott, Rosenquist, and van Zee (1987) highlighted how undergraduate physics students commonly experience difficulty connecting graphs to physics concepts and to the real world [9]. More recently, relevant PER studies have broadened to focus on how students engage with multiple representations (e.g., [27,28]), how students' use of representations varies in specific learning contexts (e.g., [29,30]), or how students choose and shift between different modes of representations (e.g., [31][32][33][34]). However, few studies in PER have focused explicitly on understanding how students generate graphical representations, either manually (i.e., paper and pencil) or with computer software, even though this scientific practice is paramount to the field of physics. Eshach (2020) used intuitive rules theory [35] to develop a conceptual framework to understand challenges students encounter when generating graphical representations of kinematic phenomena [36]. They showed that students use simple intuitive rules, such as "same A - same B," to identify salient features of existing representations to make new representations for different purposes. Most closely related, Nixon et al. (2016) studied students' abilities to manually generate (by hand with paper and pencil) and interpret graphs during lab instruction [14]. Researchers scored students' hand-drawn graphs from lab activities to assess their quality and interpretation via best-fit lines. Their analysis showed that students in introductory physics lab courses could successfully generate and interpret graphs using best-fit lines, though this often occurred without connection to underlying physics concepts. Our study moves beyond prior PER studies in several ways. First, to highlight a lesser-studied aspect of students' RC, we investigate students' generation of graphical representations using self-collected data, rather than their interpretations of pre-generated graphs. By situating this study observationally in a laboratory course setting, we aim to better understand students' graph generation RC as it would naturally occur in authentic scientific inquiry. Second, our study occurs in a learning setting where students collect and maintain a large data corpus and use spreadsheet software to organize, manipulate, and represent their data, rather than using manual graphing techniques. Use of computer software for visual representation is a more common representational technique for students and professionals alike.

II. CASE STUDY: SELECTION AND METHODOLOGY

We provide a case study analysis of a student group's activity in a Fall 2019 (in-person) introductory physics for life sciences (IPLS) lab course at a research-intensive university in the western United States. In this course, students are expected to generate research questions and conduct two- or three-week independent investigations with minimal direct instruction from teaching or learning assistants (TAs or LAs, respectively). This case study comes from a larger project investigating the nature of student engagement with experimental data in physics laboratory settings [37]. To identify this case study group, we reviewed previously collected research data, including: 1) observational data from student groups, which included screen capture, video, and audio data; 2) students' submitted pre-investigation design plans, where they outline their plans for conducting their investigations; and 3) students' individual lab reports.
The chosen group comprises four students: Pam, Andy, Neesha, and Chloe [38]. All four were non-freshmen students majoring in life or behavioral sciences and intended to enroll in post-graduate health science programs. We chose this group for several reasons. First, the group exhibited consistent verbal discussion related to graphical representations throughout their investigation. Second, students' interactions with TAs/LAs only involved general support and guidance, not direct instruction. Third, by comparing final lab reports with those of other students, the quality of this group's final graphical representations and experimental results was representative of the course population. We focus on the group's Lab 1 investigation, which involved studying the biological kinematics of five confined zebrafish. The group was provided a video of five zebrafish swimming in a roughly 1 ft² tank; they qualitatively observed that the fish may be swimming faster when closer together. Their experiment focused on testing a hypothesis that confined zebrafish are antisocial; this hypothesis relied on observations of an inverse relationship between fish swimming velocity and fish-to-fish (f2f) distance (the closer two fish are to each other, the faster they will swim). Our analysis used screen-capture data collected from the group's Lab 1 investigation, which had been previously coded for instances when students engaged in various experimental actions, including creating and modifying representations [37]. Subsequent narrative analysis focused on truncating the group's investigation into natural excerpts where students discussed and enacted strategies to generate graphs.

III. RESULTS

Our analysis begins after the group finished collecting data. Using manual tracking software, the group collected x-y position-tracking data for all fish for the length of the video (∼10 s), distance traveled per frame, instantaneous velocities, and various irrelevant data. The group spent roughly 1 hour per week engaging in active experimentation.

Identifying (potential) covarying quantities

Choosing (potential) covarying quantities to represent was the group's first strategy in moving towards generating a graph that would effectively test their hypothesis. The following narrative comes from a group conversation that occurred 35 minutes into Week 1 experimentation (Week 1 - 35 min). After the group finalizes data collection, Neesha shifts the group's attention to determining what they are graphing, including what data they should compare in their graph (Neesha: "What are we graphing? Are we doing the same thing from [the warm-up], or distance versus time or ...?"). The group's discussion quickly revisits their hypothesis' implied quantities (f2f distance and fish swimming velocity). In lines 3 and 4, Pam and Chloe acknowledge that their collected data's distance values are not the f2f distances they need. Neesha and Pam then respond that they need to identify an equation that can convert their x-y position-tracking data into f2f distances. Here, the group implicitly agrees they need to determine f2f distance for various fish pairings and corresponding fish swimming velocities. After further discussion, the group calculates f2f distances for their first fish pairing (fish A and B), chosen based on observations that fish A and B were the closest two fish at any point in time. Overall, this excerpt highlights students' immediate efforts to identify (potential) covarying quantities they would need to test their hypothesis.
Immediately after collecting data, students identified appropriate (potential) covarying quantities, even though their raw data did not include these quantities. Enactment of this strategy occurred without prompting from instructional staff, suggesting that students chose to identify these quantities of their own volition. Students were able to backward-plan from their needed covarying quantities to identify initial data to manipulate (e.g., via equations) to obtain the desired quantities.

Choosing appropriate data samples for representation

After calculating f2f distances for the A-B fish pairing, the group's next strategy was to choose appropriate data samples from their large dataset to include in their graph. Upon completing their calculations (Week 1 - 51 min), the group recognizes that further calculations could result in thirteen unique fish pairings in their analysis. Likely hesitant to engage with what they perceive as a large amount of data (Pam: "I just want to ... start over!"), the group begins discussing which fish pairings would be best to include in their representations. The group consults the TA, who says they can choose a representative sample that shows variation in f2f distances and velocities. The group takes this as permission that they can reduce their dataset as long as they appropriately justify their decisions. Further discussion ensues, with students negotiating potential strategies for reducing their data to a representative subset to include in their graph. Andy suggests postponing selection of further pairings until they complete calculations and generate graphs for the first pairing (A-B) (Andy: "I think we can do the main ones and see what we get."). Chloe and Neesha propose using extreme cases, fish pairings that are closest and farthest at any point in time:

7 Neesha: If we were just concerned about them being close together and them being far ... does that make sense? Cause our claim [39] was kind of like, if they're closer, they're faster ...
8 Chloe: B and E are the farthest ...
9 Neesha: The farthest and slowest, does that make sense?

Andy rebuts by proposing they could use a single pairing of interest and a "control" pairing to directly compare against (Andy: "Okay, I think we should do B and C and then a control of either ..."). Pam advocates that they can use the minimal amount of data necessary to test their hypothesis effectively (Pam: "... we could actually just take two fish, we could analyze just two fish, and how their velocities change when they're farther versus when they're closer ..."). Likely recognizing the numerous potential strategies being offered without clear direction, Neesha reintroduces the group's initial hypothesis to reorient the discussion, using this to again argue for her choice of the extreme case fish pairings (Neesha: "So let's go back, so our claim is that if they're closer, they'll move faster, if they're apart, they'll move slower. So, if we just analyze the fishes that are closer together and the fishes that are farthest, then we can compare whatever we find, right?"). The group comes to an agreement on this strategy. Their final strategy was to identify which fish were closest and farthest to the original fish pairing (A-B); this culminated in their inclusion of four fish pairings, representing the fish pairings they observed closest and farthest to fish A and B.
Notable in this excerpt is how the group self-identified and negotiated several different strategies for choosing a representative subset of their large (∼3,300 unique data points and thirteen potential covarying quantity comparisons) dataset that they could reasonably include in their graphical representation. These potential strategies included: 1) choosing two extreme case subsets of data that bookend all other data; 2) choosing the most representative data subset and a control subset with which to compare; and 3) choosing the minimal amount of experimental data necessary to create a graphical representation to test the hypothesis. Again, the group frequently revisited their initial experimental goal throughout discussion and used this to determine a productive strategy, eventually deciding to use a larger representative subset that included multiple extreme cases. Also notable is how all four students advocated for different potential strategies and made a consensus decision based on all potential strategies.

Reducing data and iteratively generating graphs

Beginning their Week 2 investigation time (Week 2 - 3 min), the group's next enacted strategy was to further organize and reduce their large dataset to prepare to generate their final graph. To orient readers, the group chose to limit their analysis to only the velocities of each fish at specific points in time - when it was at its maximum and minimum distance from its partner fish - not each fish's velocity throughout the video. The group begins by identifying maximum and minimum f2f distances and corresponding velocity values in their dataset and copying them to a new data table in Excel. During this, the group again refers back to how they should represent their organized data on a graph to test their hypothesis:

10 Andy: ... do you guys want to figure out how to graph that?
11 Chloe: Yeah, we can put that in one table, so ... so like, uh ... distance, so, first column [in the table] would be fish, and the distance ... between ... oh that's fine. Do we want to do farthest distance on one graph and closest distance on another graph?
12 Pam: I feel like we can do both the same since we're just looking at the relationship between distance and velocity ...

At this point, the group's organized data table includes a column of fish pairings (fish_pair, see [40]) and two columns of their maximum (d_max) and minimum (d_min) distance separations, respectively. Without explicit group agreement on the graphing method, Chloe highlights this data and clicks "Line Chart," creating the graph shown in Figure 1. Chloe recognizes that the graph is not appropriately representing the covarying quantities they identified, since the x-axis is categorically organized by fish_pair, not numerically by distance (Chloe: "Uh ... it's not graphing how I want it. I want these [fish pairings on x-axis] to be here [in the legend]."). Pam reiterates that they are attempting to generate a graph of (f2f) distance and velocity (Pam: "... and then we'd have a chart of distance against velocity."). This prompts the group to recognize that they omitted fish velocities from their data table. The group locates the velocities that correspond to when each fish was closest or farthest from its paired partner and adds these values to their data table as two respective columns (v_1,max and v_1,min). They then create a second version of their line graph incorporating fish_pair, d_max, and v_1,max.
Their resultant graph again has fish_pair as the categorical x-axis, with two lines plotting d_max and v_1,max with respect to fish_pair. Chloe again recognizes the error of fish_pair on the x-axis, and the group begins to iteratively generate graphs using trial-and-error (Chloe: "We're kinda getting there. Pressing every button we need!"), choosing different subsets of their data table and different types of representations (e.g., line, bar). Still without success after several iterations of generating different graphs, they seek guidance from the LA. During discussion, the LA asks what type of graph and data would support their hypothesis (LA: "Now, picture, if we had a graph that supported that, what would it look like?"), then prompts the group to consider using a scatterplot. The students then guide the LA as he roughly sketches their data by hand, with each fish velocity (y-variable) and its associated f2f distance (x-variable) as a point on the scatterplot. The group agrees that a scatterplot would be an appropriate representation but hesitates because it removes information about the fish pairing relationships. Additional discussion ensues and the group eventually decides that the benefit of the resulting graph outweighs the loss of the fish labels (Pam: "That would, like, I know we wouldn't label the fish, but that might still get us ... somewhere."). After reorganizing their data to have all f2f distances in one column and all corresponding velocities in another column, the group creates a final scatterplot, shown in Figure 2. This segment highlights how students utilized several strategies to organize their chosen data and create an appropriate graphical representation. Most apparent was their use of "trial-and-error" methods to iteratively organize and select different subsets of data for subsequent generation of graphical representations. Students' initial unsuccessful "trial-and-error" graph generation prompted the transition to a new productive strategy, introduced by an LA, in which students discussed and helped sketch a simplified graph that would align with their hypothesis. By sketching what they expected their graph to look like if their hypothesis were correct, the group was able to clarify how to organize their data and utilize the computer software to generate an appropriate graphical representation. This process also prompted students to omit some features of their data (i.e., fish pairing labels) in favor of a graph type (i.e., a scatterplot) that better aided in answering the research question.

IV. DISCUSSION

This study identified three productive strategies students use when generating graphical representations with their collected data: 1) identifying appropriate (potential) covarying quantities; 2) choosing representative data subsets suitable for representation; and 3) iteratively reducing data and generating graphs. Overarching these enacted strategies, the group continually referred back to their hypothesis when determining what strategies would support their representational goals. Through numerous experimental steps to create an appropriate (but not necessarily ideal) graphical representation, the group's productive progression is evidence of students' RC [6,41]. We emphasize that these are not the only productive strategies enacted by students in these contexts, nor are they necessarily the most effective.
This work brings up new research questions about whether there are larger connections between the strategies students enact to generate graphical representations and how the representation can foster sensemaking about the represented scientific phenomena. This study shows how students may utilize productive strategies to generate graphical representations of data from large, complex datasets collected in undergraduate physics lab settings. First, productive engagement with large datasets is a new but growing learning goal in introductory physics lab courses; this analysis suggests that students maintain degrees of competency in this crucial skill, but still face challenges navigating large datasets in computer software when generating representations. Second, as has been described in prior literature, informal representational drawing may be productive in moving students along in their generation of formal scientific graphical representations. When the group struggled to create an appropriate representation during their iterative graphing, the LA prompted them to draw their hypothesized graph's general trend, allowing them to determine a more appropriate type of representation. Pedagogically, it may be beneficial to prompt students to create informal drawings of their intended graphs, as this may provide a more natural generative space while potentially limiting technological hindrances from computer software.
5,224.8
2021-10-10T00:00:00.000
[ "Physics", "Education" ]
Non-orthogonal joint block diagonalization based on the LU or QR factorizations for convolutive blind source separation

This article addresses the problem of blind source separation, in which the observed mixtures are most often convolutive and, moreover, the source signals generally cannot be assumed to be independent and identically distributed. One kind of prevailing and representative approach for overcoming these difficulties is the joint block diagonalization (JBD) method. To improve present JBD methods, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorizations. Using Jacobi-type matrices we can replace high dimensional minimization problems with a sequence of simple one-dimensional problems. The novel methods are more general, i.e., the target matrices are not required to be orthogonal, positive definite or symmetric and a preliminary whitening stage is no longer compulsory; further, the convergence is also guaranteed. The performance of the proposed algorithms, compared with the existing state-of-the-art JBD algorithms, is evaluated with computer simulations and vibration experiments. The results of the numerical examples demonstrate the robustness and effectiveness of the two novel algorithms, which provide a significant improvement, i.e., shorter convergence time, higher convergence precision, and a better success rate of block diagonalization. The proposed algorithms are also effective in separating vibration signals of convolutive mixtures.

Introduction

Blind source separation (BSS) deals with the problem of finding both the unknown input sources and the unknown mixing system from only the observed output mixtures. BSS has recently become the focus of intensive research work due to its high potential in many applications such as antenna processing, speech processing and pattern recognition [1][2][3]. The recent successes of BSS might also be used in mechanical engineering [4][5][6][7]. In these reported applications for tackling BSS, there were two kinds of BSS models, i.e., instantaneous and convolutive mixture models. The instantaneous mixture models with a simple structure have been described in many papers and books [8][9][10]. However, when dealing with convolutive mixture signals, BSS faces a number of difficulties which seriously hinder its feasibility [11,12]. There is currently an endeavor of research in separating convolutive mixture signals, yet no fully satisfying algorithms have been proposed so far.
Many approaches have been proposed to solve the convolutive BSS (CBSS) problem in recent years. One kind of prevailing and representative approach is joint block diagonalization (JBD), which can produce a potentially more elegant solution for CBSS in the time domain [13]. In this article, we focus on the JBD problem, which was first treated in [14] for a set of positive definite symmetric matrices. The same conditions were also mentioned in [15]. To solve the JBD problem, Belouchrani sketched several Jacobi strategies in [16][17][18]: the JBD problem was turned into a minimization problem which was processed by iterative methods; the block-diagonalizer was built as a product of Givens rotations, each of which minimized a block-diagonality criterion around a fixed axis. Févotte and Theis [19,20] pointed out that the behavior of the Jacobi approach depends strongly on the initialization of the orthogonal basis and also on the choice of the successive rotations. They then proposed some strategies to improve the efficiency of JBD. However, there are also several critical constraints: the joint block-diagonalizer is an orthogonal (unitary in the complex case) matrix, and the spatial pre-whitening is likely to lead to a larger error which, moreover, cannot be corrected in the subsequent analysis. In [21] a gradient-based JBD algorithm was used to achieve the same task but for non-unitary joint block-diagonalizers. This approach suffers from a slow convergence rate since the iteration uses a fixed step size. To overcome this shortcoming, some gradient-based JBD algorithms with optimal step size have been provided and studied in [22][23][24]. However, these algorithms are apt to converge to a local minimum and have low computational efficiency. To eliminate the degenerate solutions of the non-unitary JBD algorithm, Zhang [25] optimized a penalty-term-based weighted least-squares criterion. In [26], a novel tri-quadratic cost function was introduced and an efficient algebraic method based on triple iterations was used to search for the minimum point of the cost function. Unfortunately, this method produces redundant values and introduces errors when the mixing matrix is inverted. Some new Jacobi-like algorithms [27,28] for non-orthogonal joint diagonalization have been proposed, but unfortunately they cannot be used to solve the problem of block diagonalization.

Our purpose here is not only to tackle the problem of approximate JBD by discarding the orthogonal constraint on the joint block-diagonalizer, i.e., imposing as few assumptions as possible on the matrix set, but also to propose JBD algorithms characterized by simplicity, effectiveness, and computational efficiency. Subsequently, we suggest two novel non-orthogonal JBD algorithms based on Jacobi-type schemes. The new methods are an extension of the joint diagonalization (JD) algorithms [29] based on the LU and QR decompositions mentioned in [30,31] to the block diagonal case.
This article is organized as follows: the JBD problem is stated in Section 2.1.The two proposed algorithms are derived in section 2.2, whose convergence is proved in Section 2.3.In Section 3, we show how to apply the JBD algorithms to tackle the CBSS problem.Section 4 gives the results of numerical simulation by comparing the proposed algorithms with the state-of-the-art gradient-based algorithms introduced in [23].In Section 5, the novel JBD algorithms are proved to be effective in separating vibration source of convolutive type, which outperforms JBD OG and JRJBD algorithms [19]. Let ∈ × be a set of matrices (the matrices are square but not necessarily Hermitian or positive definite), that can be approximately diagonalized as: where ∈ × is a general mixing matrix and the matrix ∈ × denotes residual noise, (⋅) denotes complex conjugate transpose (replace it by (⋅) in real domain).( = 1,⋅⋅⋅, ) is an × block diagonal matrix where the diagonal blocks are square matrices of any size and the off-diagonal blocks are zeros matrices i.e.: where , denotes the th diagonal block of the size × such that + +⋅⋅⋅ + = .In general case, e.g. in the context of CBSS, the diagonal blocks are assumed to be the same size, i.e. = × and where denotes the × null matrix.The JBD problem consists of the estimation of and ( = 1,⋅⋅⋅, ) when the matrices are given.It can be noticed that the JBD model remains unchanged if one substitute by Λ and by Λ ( Λ ) , where Λ is a nonsingular block diagonal matrix in which the arbitrary blocks are the same dimensions as .is an arbitrary block-wise permutation matrix.The JBD model is essentially unique when it is only subject to these indeterminacies of amplitude and permutation [24]. To this end, our aim is to present a new algorithm to solve the problem of the non-orthogonal JBD.The cost function with neglecting the noise term suggested in [23] is considered as follows: The above cost function can be regarded as the off-diagonal-block-error, our aim is to find a non-singular such that the ( ) is as minimum as possible.Where ‖⋅‖ is the Frobenius norm and stands for inverse of the matrix (in BSS context, serves as the separating matrix).Considering the square matrix = ( , ) ∈ × : where , for all , = 1,⋅⋅⋅, are × matrices (and + +⋅⋅⋅ + = ) and two matrix operators Bdiag(⋅) and OffBdiag(⋅) can be respectively defined as: Two novel joint block diagonalization algorithms based LU and QR factorizations Any non-singular matrix admits the LU factorization [30]: where and are × unit lower and upper triangular matrices, respectively.A unit triangular matrix represents a triangular matrix with diagonal elements of one.also admits the QR factorization: where is × orthogonal matrix.Considering the JBD model's indeterminacies, we note that any non-singular square separating matrix can be represented as these two types of decomposition. Here, we will implement the JBD in real domain (which is the problem usually encountered in BSS) i.e., in a real triangular and orthogonal basis.It is reasonable to consider the decompositions Eq. ( 4) or (5) and hence replace the minimization problem represented in Eq. ( 3) by two alternating stages involving the following sub-optimization: where = , , and denote the estimates of , and , respectively.Moreover, we adopt the Jacobi-type scheme to solve Eq. ( 6) and 7(a), 7(b) via a set of rotations. 
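As a small illustration of this criterion, the sketch below evaluates the off-diagonal-block error of a candidate block-diagonalizer for the real-valued case. It is not the authors' code; the names Ms (target matrices), B (candidate separating matrix) and L (common block size) are assumptions introduced here for illustration.

import numpy as np

def off_block_cost(B, Ms, L):
    # Sum over k of the squared Frobenius norms of the off-diagonal blocks
    # of B M_k B^T (real case); the JBD algorithms seek a non-singular B
    # making this quantity as small as possible.
    n = B.shape[0]
    assert n % L == 0, "matrix dimension must be a multiple of the block size"
    cost = 0.0
    for M in Ms:
        T = B @ M @ B.T
        for i in range(0, n, L):
            for j in range(0, n, L):
                if i != j:
                    cost += float(np.sum(T[i:i+L, j:j+L] ** 2))
    return cost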
The Jacobi matrix of lower unit triangular is denoted as ( ) , where the parameter corresponding to the position ( , ) ( > ) i.e., ( ) equals the identity matrix except the entry indexed ( , ) is .In a similar fashion, we define the Jacobi matrix of unit upper triangular ( ) with parameter corresponding to the position ( , ) ( > ).In order to solve Eq. ( 6) and 7(a), we will firstly find the optimal ( ) and ( ) in each iteration.For fixed , one iteration of the method consists of 1. Solving Eq. ( 6) with respect to of ( ), and.Updating ← ( ) ( ) for all (U-stage) 2. Solving Eq. (7a) with respect to of ( ), and.Updating ← ( ) ( ) for all (L-stage) We herein note that the proposed two non-orthogonal JBD algorithms are all of abovementioned Jacobi-type, with the only differences on the adopted decompositions (LU or QR) and implementation details.Next, we give the details of the proposed algorithms.Following the Eq. ( 6), we have: where is the block dimension, , = 1, … , and For matrix , ( , ) denotes a row-vector whose elements are from the th row of indexed by , the is a row vector ( , ) is defined similarly.The computation of the optimal in Eq. ( 8) is: If = 0 or = 0 set = 0, i.e. ( ) cannot be reduced by the particular ( ).As for the lower triangular matrices ( ), we have similar result: where is the block dimension, , = 1, … , and > : is a row vector in which element satisfy ≠ ∈ [1: − 1, +: ], rounds the elements of to the nearest integers towards infinity: The computation of the optimal value in Eq. ( 10): ( ) cannot be reduced by the particular ( ).The computation of the optimal parameter requires solving a polynomial of degree 2 in the real domain, which is more effective than other JBD methods that need to solve a polynomial of degree 4, such as JBD OG , JBD ORG and JBD-NCG, etc. [22][23][24]. In the QR algorithm, we consider the QR decomposition of Β, hence the sub-optimization problem in the Q-stage in Eq. 
7(b) is indeed an orthogonal JBD problem, which can be solved by Févotte's Jacobi-type algorithm [19]. Févotte indicated that the behavior of the Jacobi approach depends strongly on the initialization of the orthogonal basis and on the choice of the successive rotations. Here, the algorithm is initialized with the matrix provided by joint diagonalization of the target set, and at each iteration the index pair ensuring the maximum decrease of the criterion is chosen. Having obtained the U-stage and the L-stage (Q-stage) of the proposed algorithm, we loop over these two stages until convergence is reached. Several ways of controlling the convergence of JBD algorithms are possible. For example, the iterations can be stopped when the parameter values found in each sweep of the U-stage or L-stage (Q-stage) are small enough, which indicates a negligible contribution from the elementary rotations and hence convergence. One may also monitor the sum of squared off-diagonal-block norms and stop when its change falls below a preset threshold. The condition that the change of the block-diagonalizer between two successive complete runs of the U-stage and L-stage is smaller than a threshold is also commonly used as a termination criterion. Here, we stop the iteration when the ratio of the summed squared Frobenius norms of the off-diagonal blocks to those of the diagonal blocks of the transformed matrices falls below a preset threshold, which reflects the relative weight of the off-block-diagonal part with respect to the block-diagonal part. This criterion can be applied to all iterative JBD methods and therefore allows a more intuitive comparison between them, making it a rational and effective termination criterion. In the remainder of the manuscript we use one of the termination criteria denoted (st1), (st2) and (st3) (described in Section 4). We name the novel JBD approaches based on the LU and QR factorizations LUJBD and QRJBD, respectively, and summarize them as follows: 1. Initialize the block-diagonalizer with the identity matrix. 2. U-stage (R-stage): set the triangular factor to the identity; for 1 ≤ i < j ≤ n, find the parameter minimizing the criterion from Eq. (9) and update the target matrices and the factor with the corresponding Jacobi matrix. 3. L-stage (Q-stage): proceed analogously with the lower triangular Jacobi matrices (with Févotte's orthogonal Jacobi rotations in the QR case). 4. If the termination criterion (st1), (st2) or (st3) is not yet satisfied, accumulate the factors into the block-diagonalizer and go back to step 2; otherwise stop. In this way, each of the n(n − 1)/2-dimensional minimization problems is replaced by a sequence of simple one-dimensional problems using triangular and orthogonal Jacobi matrices. Note that the matrix updates can be realized with a few vector scalings and additions, so the algorithms cost less time than other non-orthogonal JBD methods [22][23][24]. The existence and uniqueness of the joint block diagonalization for this cost function have been proved in [20]. At each iteration of the algorithm, the matrices obtained after the rotations are thus 'at least as block-diagonal as' the matrices of the previous iteration. Since every bounded monotonic real sequence converges, the convergence of our algorithm is guaranteed.
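To make the alternating structure and the stopping rule tangible, the following sketch outlines the LU-based iteration in Python. It is a schematic reconstruction under assumptions rather than the authors' implementation: the names Ms (target matrices), B (accumulated block-diagonalizer) and L (block size) are introduced here, and the one-dimensional minimizations of Eqs. (9)-(10) are replaced by a coarse grid search purely for illustration; in practice the closed-form quadratic solution described above is what makes the method cheaper than the gradient-based alternatives.

import numpy as np

def off_on_blocks(T, L):
    # Split the squared Frobenius norm of T into its off-diagonal-block and
    # diagonal-block parts (square blocks of size L x L).
    n = T.shape[0]
    off, on = 0.0, 0.0
    for i in range(0, n, L):
        for j in range(0, n, L):
            s = float(np.sum(T[i:i+L, j:j+L] ** 2))
            if i == j:
                on += s
            else:
                off += s
    return off, on

def lu_jbd(Ms, L, eps=1e-8, max_sweeps=50):
    # Jacobi-type non-orthogonal JBD sketch with alternating U- and L-stages.
    Ms = [np.array(M, dtype=float) for M in Ms]
    n = Ms[0].shape[0]
    B = np.eye(n)
    grid = np.linspace(-1.0, 1.0, 201)

    def sweep(lower):
        nonlocal Ms, B
        for i in range(n):
            for j in range(n):
                if i == j or (i > j) != lower:
                    continue
                best_t = 0.0
                best_c = sum(off_on_blocks(M, L)[0] for M in Ms)
                for t in grid:
                    J = np.eye(n)
                    J[i, j] = t                      # elementary triangular Jacobi matrix
                    c = sum(off_on_blocks(J @ M @ J.T, L)[0] for M in Ms)
                    if c < best_c:
                        best_t, best_c = t, c
                J = np.eye(n)
                J[i, j] = best_t
                Ms = [J @ M @ J.T for M in Ms]       # update the target matrices
                B = J @ B                            # accumulate the block-diagonalizer

    for _ in range(max_sweeps):
        sweep(lower=False)                           # U-stage: upper triangular updates
        sweep(lower=True)                            # L-stage: lower triangular updates
        off = sum(off_on_blocks(M, L)[0] for M in Ms)
        on = sum(off_on_blocks(M, L)[1] for M in Ms)
        if off / on < eps:                           # relative off-block / block criterion
            break
    return B, Ms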
Application to CBSS The problems of convolutive BSS (CBSS) occur in various applications.One typical application is in blind separation of vibration signals, which is fully studied in this paper for detecting the solution of the CBSS problems.The CBSS consists of estimating a set of unobserved source signals from their convolutive mixtures without requiring a priori knowledge of the sources and corresponding mixing system.Then the CBSS can be identified by means of JBD of a set of covariance matrices.We consider the following discrete-time MIMO model [25]: where is the discrete time index, = 1, … , , denotes FIR filter's length.( ) = [ ( ),⋅⋅⋅, ( )] denotes source signal vector with the source numbers are , and ( ) = [ ( ),⋅⋅⋅, ( )] is the mixing signal vector obtained by observation signals.In the mixing linear time-invariant system, the matrix-type impulse response ( ) = [ ( ),⋅⋅⋅, ℎ ( )] consists of channel impulse responses ℎ ( ) ( = 1, … , , = 1, … , ).Aiming to the received signal on the th array element, we take the + 1 sliding window and constitute a column vector: Then putting the array element processed by sliding window together and defining the observed signal vector: Hence: where ( ) = [ ( ), ( ),⋅⋅⋅, ( )] , and: is block element of which matrix dimension is ( + 1) × ( + ), and: . The following assumptions concerning the above mixture model Eq. ( 14) have to be made to ensure that it is possible to apply the proposed algorithms to CBSS [25]. Assumption 2. The sensor noises ( ) are zero mean, independent identically distributed with the same power .The noises are assumed to be independent with the sources. Assumption 3. The mixing matrix is assumed as column full rank.This requires that the length of the sliding window satisfies ( + 1) ≥ ( + ). Assumption 1 is the core assumption.As is shown in [25], this assumption enables us to separate the sources from their convolutive mixtures by diagonalizing the second-order statistic of the reformulated observed signals (this will be addressed below).Assumption 2 enables us to easily deal with the noise and Assumption 3 guarantees that the mixing system is invertible, therefore it is a necessary condition that the source signals can be completely separated. Under these assumptions, the spatial covariance matrices of the observations satisfy: where ∈ [0,1,2, … , − 1] ( = 1, … , ) is the successive time delays, ( , − ) is computed according to [26].It can be deduced from the above assumptions that in Eq. ( 15) the matrices take the following forms, respectively: where the block matrices in ̅ ( ) and ̅ ( ) have the following form: where (− , − ) = ( ( − ) ( − )) , ( ) have the similar form, which is the ( + ) × ( + ) matrix.According to the Eq. ( 15), a group of matrices = , = 1, … , which can be block diagonalized, and satisfy = , have diagonalization structure.Hence, The JBD method mentioned in section 2.2 can be used to solve CBSS problem.Once the joint block diagonalizer is determined, the recovered signals are obtained up to permutation and a filter by: It is worth mentioning that the indeterminacies of amplitude and permutation exist in JBD algorithms correspond to the well-known indeterminacies in CBSS.The correlation matrices ̅ ( ) is actually replaced by their discrete time series estimate.To acquire a good estimate of the discrete correlation matrices, we may divide the observed sequences (the output of the reformulated model ( 15)) into the appropriate length of the sample. 
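To make the construction of the target matrices concrete, the sketch below estimates a set of time-lagged covariance matrices of the sliding-window (stacked) observations; these are the matrices subsequently passed to the JBD routine. It is an illustrative sketch under assumptions: the names x, Lwin and lags are introduced here, a simple mean removal and symmetrization are used, and the exact estimator of the paper (Eq. (15) and the reference it cites) is not reproduced.

import numpy as np

def stacked_observations(x, Lwin):
    # x: (m, T) array of m sensor signals; stack Lwin+1 delayed copies of each
    # sensor to form the reformulated observation vector of dimension m*(Lwin+1).
    m, T = x.shape
    rows = [x[:, Lwin - d : T - d] for d in range(Lwin + 1)]
    return np.vstack(rows)                      # shape (m*(Lwin+1), T-Lwin)

def lagged_covariances(x, Lwin, lags):
    # Estimate C(tau) = E[ y(t) y(t-tau)^T ] for each tau in `lags`,
    # where y is the stacked observation vector.
    y = stacked_observations(x, Lwin)
    y = y - y.mean(axis=1, keepdims=True)       # remove the mean
    covs = []
    for tau in lags:
        a, b = y[:, tau:], y[:, : y.shape[1] - tau]
        C = a @ b.T / a.shape[1]
        covs.append(0.5 * (C + C.T))            # symmetrize the estimate
    return covs

For example, lagged_covariances(x, Lwin=17, lags=range(30)) would roughly mimic the setting reported later for the vibration experiment (sliding window 17, 30 covariance matrices with linearly spaced lags).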
Numerical simulations Simulations are now provided to illustrate the behavior and the performance of the proposed JBD algorithms (LUJBD, QRJBD).We will also compare the proposed algorithms with the JBD OG , JBD ORG in the robustness and efficiency by generating random dates.To achieve these purposes, a set of real block-diagonal matrices (for all = 1, … , ) are devised from random entries with a Gaussian distribution of zero mean and unit variance.Then, random noise entries with a Gaussian distribution of zero mean and variance will be added on the off-diagonal blocks of the previous matrices .A signal to noise ratio can be defined as = 10log(1/ ).To measure the quality of the different algorithms, the following performance index is used [22]: where ( , ∈ 1,⋅⋅⋅, ) denotes the ( , )th block matrix of = .This index will be used in the CBSS, which can take into account the inherent indetermination of the BSS problem.It is clearly that the smaller the index performance ( ) , the better the separation quality.Regarding to the charts, ( ) is often given in dB i.e., ( ) = 10log( ( )).In all the simulations, the true mixing matrix or has been randomly chosen with zero mean and unit variance. In Fig. 1 and Fig. 2, we focus on the exactly determined case = 9, = 3, = 20, = 60 dB, the results have been averaged over 100 runs.Fig. 1 represents the percentage of successful runs, where a run is declared successful w.r.t. the following criteria satisfy: < 10 , (st1), (st2).Fig. 2 represents the average running time per successful convergence.Comparing the LUJBD and QRJBD approaches with the state-of-the-art JBD OG , and JBD ORG. , we can confirm that the approaches proposed in Section 2.2 improve the performance of JBD better than the gradient-based methods: the LUJBD and QRJBD methods converge to the global minimum more frequently, see Fig. 1, and faster see Fig. 2 than JBD OG , JBD ORG .Under the same terminated criteria, it can be observed that the LUJBD and QRJBD methods also outperform the JBD OG , JBD ORG methods, and the LUJBD method show the best performance.We can also conclude that the sensitivity of the different convergence termination criteria for different JBD methods is diverse and, moreover, the percentage of successful runs and average running time are also varying.In other words, we should choose appropriate terminate criteria which is able to obtain the accuracy of block diagonalization, goodness of success rate and convergence speed. In Fig. 3 we focus on the exactly determined case = 9, = 3, = 20, = 60 dB, the results have been averaged over 100 runs.Because various approaches have different iteration time of each step, Here, we consider all methods converge to the same time, which is more reasonable than converge to certain iteration steps mentioned in [23,25].The evolution of the performance index versus the convergence time shows that the convergence performance of the LUJBD and QRJBD methods is better than the JBDOG and JBDORG method.The LUJBD and QRJBD algorithms cost less time when performance index reaches a stable convergence level, and have smaller value of performance index when all algorithms converge to same time.In other words, the BSS methods proposed in this paper possess less convergence time, higher precision of convergence, faster convergence speed.In Fig. 
4, we discuss the number of the matrices how to affect the performance.The results have been averaged over 100 runs.We devise the same stop criteria i.e., one of the terminate criteria (st1), (st2) and (st3) is satisfied.We set = 9, = 3, = 60 dB.The following observations can be made: the more matrices to be joint block-diagonalized, the better performance we can obtain.But the computational cost also increases when the number of matrix rises.Therefore, the choice of matrix number should combine the accuracy of JBD algorithms with complexity of JBD algorithms.From Fig. 4, the matrix number 20 is a better choice.The LUJBD algorithm with better convergence turned out to be slightly superior to other JBD algorithms.With the same stop criteria and other assumptions in Fig. 4, one can observe from Fig. 5 that when the SNR grows, the average performance of each algorithm becomes better except few fluctuation points.The noise sensitivity of LUJBD is slightly higher that the remaining three kinds of methods, however, for a given value of SNR, the average performance index of LUJBD and QLJBD is always better than that of two gradient-based methods. Finally, in Fig. 6 and Fig. 7, we study separation performance versus matrix dimension and the block dimension for both algorithms ( = 20, = 60 dB), the results have been averaged over 100 runs.One can observe that in the same block dimension case, the larger the matrix dimension , the weaker the estimation accuracy of mixing matrix.And in the same matrix dimension case, the larger the block dimension , the better the estimation accuracy of mixing matrix.Therefore, we can improve the performance index by increasing the number of the matrix and the block dimension when the dimension of target matrix increases.=60 and = 20 matrices Applying CBSS to the vibration source separation The experimental model is a double-stiffened cylindrical shell depicted in Fig. 8, which is used to simulate the cabin model.Underwater vibration tests of the double-stiffened cylindrical shell were carried out in anechoic water pool with a length of 16 meters, a width of 8 meters and a height of 8 meters.In the double-stiffened cylindrical shell, three exciters were arranged in the front part (No. 1 excitation source), middle part (No. 2 excitation source), rear part (No. 3 excitation source), respectively, which were used to simulate vibration sources of the internal equipment.Twenty-nine vibration acceleration sensors were arranged in the inner shell, and four accelerometers containing abundant vibration information were arranged in the vicinity of each excitation point.The location of exciters and acceleration sensors were shown in Fig. 9.Only the vertical excitation and response were considered in this test, and the model was located underwater 3.75 m.During the test, three exciters were controlled on the shore, and each exciter was turned on separately or multiple exciters were operated simultaneously according to different test requirements.Three exciters launched a continuous sinusoidal signal with different excitation frequencies and same energy, the frequency was 5 kHz, 4 kHz, 3 kHz, corresponding to No. 1 exciter, No. 2 exciter and No. 3 exciter.The vibration data was collected when the exciter was in stable operation.The sampling frequency was 16384 Hz and the sampling time was 10 s. Fig. 
11(d), (e), (f) show the mixture signals obtained by three sensors (17,20,23) on the inner shell when all of the exciters act simultaneously.It is obvious that the mix-signals with mutual spectrum aliasing are not able to represent real vibration characteristics and be utilized directly.We can also demonstrate that a mixture of vibrations is most often of the convolutive type which is not prone to be tackled, and moreover, the independence among the vibration sources is often not satisfactory strictly.Therefore, it is difficult to separate mechanical vibration source using In Fig. 10, the evolution of the performance index versus the convergence time shows that the convergence performance of the LUJBD and QRJBD methods are superior to the JBDOG and JRJBD methods [20].The LUJBD and QRJBD algorithms cost less time when < -20 dB, and have smaller performance index when the convergence time is same.In other words, the BSS methods proposed in this paper possess higher precision of convergence, faster convergence speed.The low computational accuracy and efficiency of the latter two algorithms are mainly due to following reasons: (1) the JBDOG algorithm generally suffers from slow convergence rate and is apt to converge to a local minimum; and the accuracy of blind source separation is often hindered by the inversion of ill-conditioned matrices.( 2) the joint block-diagonalizer of JRJBD algorithm is an orthogonal matrix, the spatial pre-whitening which is likely to lead to a larger error need to be applied.Moreover, this error is unable to correct in the subsequent analysis.11.Separation results for known channel-Sources: a) , b) , c) .Mixtures: d) , e) , f) .Separation sources for LUJBD: g) , h) , i) . Conclusions In this article, to solve the convolutive BSS (CBSS) problem, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorizations.Using Jacobi-type matrices we can replace high dimensional minimization problems with a sequence of simple one-dimensional problems.The two novel methods named LUJBD and QRJBD are no more necessarily orthogonal, positive definite or symmetric matrices.In addition, we propose a novel convergence criteria which can reflect relative change between off-block diagonal and block diagonal.And this criterion can be adapted to all of iterative methods for solving JBD problem, which can also give a more intuitive comparison between different methods.The computation of the optimal parameter requires solving a polynomial of degree 2 in the real domain in LUJBD and QRJBD methods, which is more effective than other JBD methods that need to solve polynomial of degree 4. Therefore, the two new algorithms will cost fewer times than other non-orthogonal JBD methods, and moreover, the convergence of these two algorithms is also guaranteed. 
A series of comparisons of the proposed approaches with the state-of-the-art gradient-based JBD approaches (JBD OG and JBD ORG) was carried out through a variety of numerical simulations. The results show that the LUJBD and QRJBD methods converge to the global minimum more frequently, and faster, than JBD OG and JBD ORG. Choosing an appropriate termination criterion is beneficial for the accuracy of block diagonalization, the success rate and the convergence speed. It can be readily observed that the more target matrices are jointly block-diagonalized, the better the performance obtained, but the computational cost also increases with the number of matrices; the choice of the number of matrices should therefore balance algorithm accuracy against complexity. The performance index can also be improved by increasing the block dimension and decreasing the matrix dimension. The two novel JBD algorithms and the JBD OG and JRJBD methods were then studied for separating practical vibration sources, and the convergence performance and accuracy of the LUJBD and QRJBD methods proved superior to those of the JBD OG and JRJBD methods. Finally, comparison of the recovered primary sources with the true ones demonstrates the validity of the proposed algorithms for separating the vibration signals of convolutive mixtures. Fig. 6. Average performance index versus convergence time with different M and L for the LUJBD algorithm (60 dB, 20 matrices). Fig. 7. Average performance index versus convergence time with different M and L for the QRJBD algorithm (60 dB, 20 matrices). traditional source separation methods. However, the Jacobi-type JBD algorithms based on the LU or QR factorizations proposed in this article can overcome the above shortcomings effectively. To ensure the stability of the solution, a termination criterion requiring the change of the criterion between successive iterations to fall below a small preset threshold is selected. We select 5 observed signals, i.e., sensors 17, 20, 23, 26 and 29, and 3 source signals. The model parameters (filter length, sliding-window length and number of covariance matrices) are selected to guarantee that the solution accuracy satisfies a performance index of at most -35 dB. The filter length is 13, the sliding window length is 17, and a set of 30 covariance matrices with linearly spaced time lags is used. Fig. 11(a), (b), (c) show the source signals acquired by the acceleration sensors near the excitation points (acceleration sensors 1, 8 and 14, which have a higher signal-to-noise ratio, are chosen) when each exciter acts separately. Comparison between the recovered primary sources, see Fig. 11(g), (h), (i) for the LUJBD model and (j), (k), (l) for the QRJBD model, and the true ones shows that the proposed separation algorithms, subject only to the permutation and amplitude indeterminacies, are effective. Fig. 8. The experimental cabin model. Fig. 9. Location of the exciters and acceleration sensors. Fig. 10. Performance index versus convergence time for the vibration experiment.
Enhanced Soft Magnetic Properties of Iron-Based Powder Cores with Co-Existence of Fe 3 O 4 – MnZnFe 2 O 4 Nanoparticles An iron-based soft magnetic composite with Fe3O4-MnZnFe2O4 insulation coating has been prepared by powder metallurgy method. This work investigated the microstructure and magnetic properties of Fe/Fe3O4-MnZnFe2O4 powder cores. Scanning electron microscopy (SEM) coupled with an energy dispersive spectrometry (EDS) analysis indicated that the Fe3O4 and MnZnFe2O4 nanoparticles were uniformly coated on the surface of Fe powders. The co-existence of Fe3O4 and MnZnFe2O4 contributes to the preferable distribution of nano-sized insulation powders and excellent soft magnetic properties of soft magnetic composite (SMC) with high saturation magnetization Ms (215 A·m2/kg), low core loss (178.7 W/kg measured at 100 kHz, 50 mT), and high effective amplitude permeability of 114 (measured at 100 kHz). Overall, this work has great potential for realizing low core loss and outstanding soft magnetic properties of Fe-based powder cores. Introduction Soft magnetic composites (SMCs), as reported by H. Shokrollahi in 2007 [1], are considered as advanced electrical inductance materials.SMCs can be described as ferromagnetic powders that are surrounded by a thin but electrically insulating layer and pressed into cores by easy powder metallurgy methods.Compared with the conventional laminated steel cores, SMCs offer enormous advantages, such as three-dimensional isotropic magnetic properties, low eddy-current core loss at high frequency, and high adaptability to complex machine design.Hence, it has replaced the laminated steel cores in most fields of electric motors, transformer cores, power switching inductors, online noise filters, and chokes [2][3][4][5]. 
With the increasing demands of electromagnetic devices at high frequency, improving the magnetic performance of SMCs is of crucial importance [6].The challenge in current SMCs is obtaining low eddy-current core loss at high frequency, high magnetic permeability, and high thermal stability of the materials [7,8].Eddy-current loss can be minimized in several ways.The latest technology is to increase the electrical resistivity of the insulating coatings [1].Besides, to improve the magnetic permeability, the amount of insulation coatings on the surface should be minimized, and the ferromagnetic powder content maximized [9].Thus, many studies have been carried out to develop suitable insulating materials and proper manufacturing methods in order to improve the soft magnetic properties of SMCs.Generally, the insulating materials are divided into organic and inorganic materials.Organic materials such as epoxy resin [10], parylene [11], and phenolic resin [12] exhibit a satisfactory performance in increasing the resistivity of iron powders, but their application under high temperatures is limited due to the nature of high polymer materials.Inorganic coating can therefore be a better choice, since, if the proper amount of coating material is applied, they have high thermal stability and improved soft magnetic properties.Inorganic coating materials are divided into two different types.Ceramic materials which are dia-or anti-ferromagnetic constitute one type.These materials-such as Al 2 O 3 [13], SiO 2 [14], and ZrO 2 [15]-which can significantly decrease the core loss at a high frequency, also decrease magnetic permeability by reducing the proportion of magnetic materials.Materials that exhibit ferro-or ferrimagnetism, which can ultimately affect the core loss and improve the magnetic permeability compared with the ceramic coating materials, constitute the other type.It was found that the use of nanoparticles of Fe 3 O 4 resulted in an improvement in soft magnetization [16].However, Fe 3 O 4 is a semiconductor with a resistivity of 10 -2 Ω•cm, so the eddy-current core loss will increase quickly as the frequency increases [17].MnZnFe 2 O 4 ferrite has been used in many applications such as radio frequency circus and transformer cores due to its high electrical resistivity. In this study, a novel SMC which was covered by Fe 3 O 4 and MnZnFe 2 O 4 nanoparticles was prepared.The Fe 3 O 4 nanoparticles were dispersed uniformly in the as-prepared MnZnFe 2 O 4 gel to form a uniform coating on the iron powders.Moreover, the simultaneous incorporation of Fe 3 O 4 and MnZnFe 2 O 4 resulted in the high magnetic saturation, high permeability, and reduced eddy-currency core loss. Materials Pure atomized iron powder with an average particle size of 75 µm was supplied by An Gang Industries Co., Ltd.(Anshan, China), Nano-Fe 3 O 4 powders (purity > 99.99%,20 nm) were purchased from Aladdin biochemical Polytron Technologies Company (Shanghai, China) and used without further treatment.MnZnFe 2 O 4 manufactured by a novel sol-gel auto-combustion method was used as the insulation material.Analytically pure chemicals manganese nitrate solution (50 wt.% in H 2 O, Aladdin), zinc nitrate hexahydrate (99%, Aladdin), and iron nitrate nonahydrate (99%, Aladdin) were dissolved in deionized water and citric acid (99.8%,Aladdin) to prepare the soft magnetic MnZn ferrites. 
Preparation of As-Prepared MnZn Ferrite Sol Manganese nitrate solution, zinc nitrate hexahydrate, and iron nitrate nonahydrate were dissolved in deionized water and citric acid.Then the mixture was heated to 75 • C in an oil bath and mixed evenly in a three-necked round bottom flask under magnetic stirring.Ammonia was subsequently added as the catalyst to adjust the PH value to about 5 and shorten the gel time.The resulting mixture was maintained under magnetic stirring at 75 • C for 4 h. Composite Production The Nano-Fe 3 O 4 powders were mixed with the as-prepared MnZn ferrite sol by ultrasonic treatment to prevent the agglomeration of nanoparticles.The powder mixture was dried at 100 • C for 1 h to get the gel.Then the gel was coated on the iron powders by mechanic stirring and auto-combustion.The effect of the content of MnZn ferrites (0.5-3 wt.%) and Nano-Fe 3 O 4 powders (0.5-3 wt.%) is discussed in this paper.The coated iron powders were compacted at 800 MPa in a die with 0.5 wt.% zinc stearate as the lubricant.This process resulted in a ring shapes composite with an outer diameter of 20 mm, an inner diameter of 12 mm, and a thickness of 3 mm.The schematic formation process is shown in Figure 1. Material Characterization The crystal structure analysis and morphology of the resulting product were carried out by X-ray powder diffraction (XRD) (DX-2007, Dandong Fang Yuan Co., Ltd, Dandong, China) with Cu diffraction at 30 kV and 30 mA at room temperature, and scanning electron microscopy (SEM, Nova NanoSEM 450, FEI, Hillsboro, OR, USA) equipped with an energy dispersive spectroscopy (EDS) (Ultra, EDAX, Mahwah, NJ, USA).The chemical state of coating was identified by X-ray photoelectron spectroscopy (XPS, 250XI, Thermo Fisher, Waltham, MA, USA).The densities of powder cores were determined by Archimedes principle with ethanol as the immersion fluid in the densimeter (ZMD-2, Shanghai Fangrui Instrument Co., Ltd, Shanghai, China).The electrical resistivities of the powder cores were measured by the four-point probe method.The magnetic properties of the samples, such as the saturation magnetization (Ms) and remnant magnetization (Mr), were measured by vibrating sample magnetometer (VSM, Lakeshore 7407, Columbus, OH, USA) at room temperature.The amplitude permeability and core loss of the samples were investigated by a soft magnetic AC measuring instrument (MATS-2010SA/500k, Linkioin, Loudi, China) at a magnetic excitation level of 50 mT and the frequency range from 5 kHz to 100 kHz.The D-C magnetic property was recorded using a soft magnetic DC measuring instrument (MATS-2010SD, Linkioin, Loudi, China) under the maximum applied magnetic field of 45,000 A/m. Characterization of the Soft Magnetic Composites The SEM images of iron and iron powders after the coating process are shown in Figure 2. Obviously, the surface of the coated composite (Figure 2b) shows rough morphologies with many fine particles compared with the smooth surface morphologies of the typical water-atomized powders (Figure 2a).It can be seen that the nanoparticles of Fe3O4 and MnZnFe2O4 are uniform and regular in size and shape (inset in Figure 2b).The coating of the iron powder surface by ferrites is indirectly shown in Figure 3, and the existence of iron, manganese, zinc, and oxygen clearly indicates the presence of Fe3O4-MnZnFe2O4 ferrite coating on the surface of the water-atomized iron powders. 
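For reference, the amplitude permeability of a ring core of these dimensions can be related to the measured inductance through the core's effective magnetic path length and cross-section. The sketch below illustrates that conversion; it is a generic calculation under the usual thin-ring approximation, not part of the authors' procedure, and the inductance and winding turn number used are arbitrary example values.

import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m

def ring_core_permeability(L_H, turns, od_m, id_m, h_m):
    # Relative amplitude permeability of a toroidal core from its inductance:
    # mu_r = L * le / (mu0 * N^2 * Ae), with mean path length le and area Ae.
    le = math.pi * (od_m + id_m) / 2.0          # mean magnetic path length
    ae = h_m * (od_m - id_m) / 2.0              # cross-sectional area
    return L_H * le / (MU0 * turns ** 2 * ae)

# Ring dimensions from this work: OD 20 mm, ID 12 mm, height 3 mm.
# Inductance and turn count below are example values only.
print(ring_core_permeability(L_H=1.2e-5, turns=20, od_m=0.020, id_m=0.012, h_m=0.003))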
Characterization of the Soft Magnetic Composites The SEM images of iron and iron powders after the coating process are shown in Figure 2. Obviously, the surface of the coated composite (Figure 2b) shows rough morphologies with many fine particles compared with the smooth surface morphologies of the typical water-atomized powders (Figure 2a).It can be seen that the nanoparticles of Fe3O4 and MnZnFe2O4 are uniform and regular in size and shape (inset in Figure 2b).The coating of the iron powder surface by ferrites is indirectly shown in Figure 3 Furthermore, as shown in Figure 4, the XRD pattern revealed the existence of the Fe3O4-MnZnFe2O4 ferrite phase on the surface of iron powders.According to the JCPDS card, Bragg peaks of low intensity centered at 2θ of 30°, 35°, 43°, 53°, 57°and 63° can be ascribed to the (220), (331), (400), (422), (511), and (440) planes of the Fe3O4-MnZnFe2O4 ferrite coating.It is clear that there are no other extra Bragg peaks in the composite powders, except those corresponding to the diffraction peaks of iron and Fe3O4-MnZnFe2O4 ferrite.This observation can also be confirmed by the previous EDS mapping analysis (Figure 3).As the Fe3O4 and MnZnFe2O4 shows similar XRD profiles but different valences of the Fe ion [18], XPS characterization was carried out to determine the chemical state of Fe ion.In the survey spectrum (Figure 5a), the elements, Mn, Zn, and Fe can be observed.Two main peaks of Fe 2p1/2 and Fe 2p3/2 which located at 710.9 eV and 724.3 eV were observed in the high-resolution XPS spectrum of Fe 2p in Figure 5b.In addition, a satellite peak, located at 718.5 eV, indicates the presence of Fe 3+ [19].The Fe 3+ and Fe 2+ ions were assumed to generate overlapping Fe 2p spectra.In order to identify the detailed chemical state of the Fe ion, the peaks were deconvoluted into two peaks.Fe3O4 is alternatively expressed as FeO•Fe2O3, which is the chemical state of a mixture of Fe 3+ and Fe 2+ [19].As shown in Figure 5b, the Fe 2p peaks fitted well with the Fe 3+ and Fe 2+ peaks, indicating that the oxidation layer consists of Fe3O4.To further indicate the uniformity of the insulation coating on the iron powders, an SEM image of the cross section of the composite was taken, as shown in Figure 6.It can be observed from Figure 6 that the insulation parts were uniformly distributed around the iron particles.The boundary of pure iron particles is almost 2 μm thick.This result further verified that the iron powders are uniformly covered by the nanoparticles of ferrites.As the Fe 3 O 4 and MnZnFe 2 O 4 shows similar XRD profiles but different valences of the Fe ion [18], XPS characterization was carried out to determine the chemical state of Fe ion.In the survey spectrum (Figure 5a), the elements, Mn, Zn, and Fe can be observed.Two main peaks of Fe 2p 1/2 and Fe 2p 3/2 which located at 710.9 eV and 724.3 eV were observed in the high-resolution XPS spectrum of Fe 2p in Figure 5b.In addition, a satellite peak, located at 718.5 eV, indicates the presence of Fe 3+ [19].The Fe 3+ and Fe 2+ ions were assumed to generate overlapping Fe 2p spectra.In order to identify the detailed chemical state of the Fe ion, the peaks were deconvoluted into two peaks.Fe 3 O 4 is alternatively expressed as FeO•Fe 2 O 3 , which is the chemical state of a mixture of Fe 3+ and Fe 2+ [19].As shown in Figure 5b, the Fe 2p peaks fitted well with the Fe 3+ and Fe 2+ peaks, indicating that the oxidation layer consists of Fe 3 O 4 .As the Fe3O4 and MnZnFe2O4 shows similar XRD profiles but different 
valences of the Fe ion [18], XPS characterization was carried out to determine the chemical state of Fe ion.In the survey spectrum (Figure 5a), the elements, Mn, Zn, and Fe can be observed.Two main peaks of Fe 2p1/2 and Fe 2p3/2 which located at 710.9 eV and 724.3 eV were observed in the high-resolution XPS spectrum of Fe 2p in Figure 5b.In addition, a satellite peak, located at 718.5 eV, indicates the presence of Fe 3+ [19].The Fe 3+ and Fe 2+ ions were assumed to generate overlapping Fe 2p spectra.In order to identify the detailed chemical state of the Fe ion, the peaks were deconvoluted into two peaks.Fe3O4 is alternatively expressed as FeO•Fe2O3, which is the chemical state of a mixture of Fe 3+ and Fe 2+ [19].As shown in Figure 5b, the Fe 2p peaks fitted well with the Fe 3+ and Fe 2+ peaks, indicating that the oxidation layer consists of Fe3O4.To further indicate the uniformity of the insulation coating on the iron powders, an SEM image of the cross section of the composite was taken, as shown in Figure 6.It can be observed from Figure 6 that the insulation parts were uniformly distributed around the iron particles.The boundary of pure iron particles is almost 2 μm thick.This result further verified that the iron powders are uniformly covered by the nanoparticles of ferrites.To further indicate the uniformity of the insulation coating on the iron powders, an SEM image of the cross section of the composite was taken, as shown in Figure 6.It can be observed from Figure 6 that the insulation parts were uniformly distributed around the iron particles.The boundary of pure iron particles is almost 2 µm thick.This result further verified that the iron powders are uniformly covered by the nanoparticles of ferrites. Electrical and Magnetic Properties of the Composites Magnetic hysteresis loops for Fe, Fe/Fe3O4-MnZnFe2O4, Fe3O4, and MnZnFe2O4 powders are shown in Figure 7.The loops revealed the typical magnetic behavior of SMCs, at an external field of ~30,000 Oe.The magnetic parameters such as Ms, Mr, and the ratio of remnant magnetization to saturation magnetization (Mr/Ms) are clearly presented in Table 1.It is found that the saturation magnetization of Fe/Fe3O4-MnZnFe2O4 powders (215.6 A•m 2 /kg) is slightly lower than that of Fe powder (219.5 A•m 2 /kg) but higher than that of Fe3O4 powders (84.1 A•m 2 /kg) and MnZnFe2O4 powders (33.2 A•m 2 /kg).This result demonstrated that the ferrimagnetic insulation coating has a positive effect on the saturation magnetization of SMC.The Ms value of composite iron powders decreased as the percentage of magnetic Fe3O4-MnZnFe2O4 phase increased.When the percentage of Fe3O4-MnZnFe2O4 increased from 1 wt.% to 6 wt.%, the Ms decreased from 215 A•m 2 /kg to 207 A•m 2 /kg.As shown in the Figure 7 and Table 1, the Mr value has nearly the same trend as Ms and a similar reason can be used to explain this phenomenon.It can also be seen from Table 1 that Mr/Ms shows no change with the increasing percentage of the ferrimagnetism insulation coating.This means that the value of Mr/Ms has nothing to do with the content of the Fe3O4-MnZnFe2O4 phase.7 and Table 1, the M r value has nearly the same trend as M s and a similar reason can be used to explain this phenomenon.It can also be seen from Table 1 that M r /M s shows no change with the increasing percentage of the ferrimagnetism insulation coating.This means that the value of M r /M s has nothing to do with the content of the Fe 3 O 4 -MnZnFe 2 O 4 phase. 
The direct current performance and densities for the toroidal samples with different ferrite percentage at the maximum applied field of 45,000 A/m are shown in Table 2.The coercivity increased as the content of Fe 3 O 4 and MnZnFe 2 O 4 increased from 1 wt.% to 6 wt.%.Moreover, the density decreased from 7.56 g/cm 3 to 6.92 g/cm 3 as the content of Fe 3 O 4 and MnZnFe 2 O 4 increased from 0 wt.% to 6 wt.%.It is easy to understand that, with the increasing of ferrite, the low density of Fe 3 O 4 and MnZnFe 2 O 4 leads to a decrease in the overall density of the composite.The maximum relative permeability, magnetic induction, and the remanent magnetic induction (B r ) value also decrease with the increasing content of ferrite.This can be ascribed to the dilution effect of magnetic properties, caused by the ferrimagnetism insulation coating.All the samples with ferrite showed excellent DC properties, and the compact with 1 wt.% ferrite exhibited the best DC properties. Electrical and Magnetic Properties of the Composites Magnetic hysteresis loops for Fe, Fe/Fe3O4-MnZnFe2O4, Fe3O4, and MnZnFe2O4 powders are shown in Figure 7.The loops revealed the typical magnetic behavior of SMCs, at an external field of ~30,000 Oe.The magnetic parameters such as Ms, Mr, and the ratio of remnant magnetization to saturation magnetization (Mr/Ms) are clearly presented in Table 1.It is found that the saturation magnetization of Fe/Fe3O4-MnZnFe2O4 powders (215.6 A•m 2 /kg) is slightly lower than that of Fe powder (219.5 A•m 2 /kg) but higher than that of Fe3O4 powders (84.1 A•m 2 /kg) and MnZnFe2O4 powders (33.2 A•m 2 /kg).This result demonstrated that the ferrimagnetic insulation coating has a positive effect on the saturation magnetization of SMC.The Ms value of composite iron powders decreased as the percentage of magnetic Fe3O4-MnZnFe2O4 phase increased.When the percentage of Fe3O4-MnZnFe2O4 increased from 1 wt.% to 6 wt.%, the Ms decreased from 215 A•m 2 /kg to 207 A•m 2 /kg.As shown in the Figure 7 and Table 1, the Mr value has nearly the same trend as Ms and a similar reason can be used to explain this phenomenon.It can also be seen from Table 1 that Mr/Ms shows no change with the increasing percentage of the ferrimagnetism insulation coating.This means that the value of Mr/Ms has nothing to do with the content of the Fe3O4-MnZnFe2O4 phase.The real part and imaginary part of permeability of the Fe/Fe 3 O 4 -MnZnFe 2 O 4 toroidal core at the frequency range of 1 kHz to 100 kHz are shown in Figure 8.The effect of Fe 3 O 4 -MnZnFe 2 O 4 content can be clearly observed from the curves.The real part of the compacted Fe/Fe 3 O 4 -MnZnFe 2 O 4 showed no obvious changes in the whole frequency range, while the amplitude permeability of the composite cores without ferrite decreases rapidly, which is attributed to the increase in the eddy current.The iron powder core has low electrical resistivity and resulted in a large eddy current.The eddy current increase with increasing frequency means that more reverse magnetic field is produced inside the magnet which could weaken the magnetic induction [8,20].This conclusion is in accordance with the result of magnetic induction presented in Table 2.Moreover, the real part of permeability that exceeds 100 has been achieved, which may be due to the high density (>7.0 g/cm 3 ) of compact, as shown in Table 2, and the relatively thin coating.With the increasing content of ferrite from 1 wt.% to 6 wt.%, the real permeability of compacts declined to nearly 90.This can be 
attributed to the increase in the thickness of the Fe3O4-MnZnFe2O4 ferrite. Core Loss of the Composites and Loss Separation The total core loss of the composite with different ferrite contents as a function of frequency at 50 mT is shown in Figure 10. It can be observed that the total core loss increased significantly as the frequency increased from 5 kHz to 100 kHz. With an increasing percentage of ferrite, the core loss decreased markedly. The composite with a 4 wt.% addition of Fe3O4-MnZnFe2O4 ferrite showed the smallest core loss compared with the raw Fe compact under the same test conditions. There is a slight increase in the core loss of the composite with 6 wt.% ferrite (Figure 10a), which is due to the increase in hysteresis loss shown in Figure 10b. It is well known that the total core loss can be separated into three main parts: hysteresis loss (P_h), eddy current loss (P_e) and residual loss (P_r). The residual loss can be ignored in metallic materials [1]. Thus, the total core loss can be expressed by the following equation: P_total = P_h + P_e = K_h f + K_e f^2, where f is the frequency, K_h is the hysteresis loss coefficient and K_e is the eddy current loss coefficient.
Figure 10b,c show the values of the hysteresis loss and the eddy current loss of the compacted Fe/Fe3O4-MnZnFe2O4 composites at different frequencies at 50 mT with different ferrite contents, calculated from the equation above and the measured core loss data. As can be seen in Figure 10b, the hysteresis loss of the composite increased with increasing content of Fe3O4-MnZnFe2O4 ferrite. Moreover, its value is larger than the hysteresis loss of the raw Fe compact. The hysteresis loss mainly depends on the particle size, the residual stress, and the volume fraction of inclusions (pores, impurities, defects). The ferrite increases the distributed air gap in the composite, so that domain wall movement becomes more difficult and the hysteresis loss consequently increases [22]. However, the eddy current loss of the Fe3O4-MnZnFe2O4 ferrite coated Fe cores was notably smaller than that of the raw Fe cores. This can be ascribed to the insulation effect of the Fe3O4-MnZnFe2O4 ferrite, which effectively reduced the eddy current loss.
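As a worked illustration of this two-term loss separation, the sketch below fits K_h and K_e from total core loss measured at several frequencies (at fixed flux amplitude) via a linear least-squares fit of P/f against f. It is an illustrative sketch only; the sample values are invented for demonstration and are not data from this work.

import numpy as np

# Frequencies in Hz and total core loss in W/kg at fixed Bm (illustrative values only).
f = np.array([5e3, 10e3, 20e3, 50e3, 100e3])
P = np.array([9.0, 19.0, 41.0, 115.0, 260.0])

# Model: P = Kh*f + Ke*f^2  =>  P/f = Kh + Ke*f  (a straight line in f).
A = np.vstack([np.ones_like(f), f]).T
Kh, Ke = np.linalg.lstsq(A, P / f, rcond=None)[0]

P_hyst = Kh * f          # hysteresis contribution
P_eddy = Ke * f ** 2     # eddy current contribution
print(f"Kh = {Kh:.3e} W*s/kg, Ke = {Ke:.3e} W*s^2/kg")
print("hysteresis / eddy split at 100 kHz:", P_hyst[-1], P_eddy[-1])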
Conclusions In this work, an iron-based SMC with Fe3O4-MnZnFe2O4 insulation coating has been successfully prepared by the powder metallurgy method. The co-existence of Fe3O4 and MnZnFe2O4 contributes to the preferable distribution of the nano-sized insulation powders and acts as an effective insulation layer. The composite with 4 wt.% Fe3O4-MnZnFe2O4 ferrite exhibited excellent soft magnetic properties overall, with a magnetic induction of 1.74 T, the lowest core loss of 178.7 W/kg (measured at 100 kHz and 50 mT), and a good frequency-dependent characteristic over a wide range of frequencies. In all, the efficient preparation process and the enhanced soft magnetic properties of the Fe/Fe3O4-MnZnFe2O4 composites promise potential application and mass production in magnetic devices. Figure 2. (a) SEM image of iron powders; (b) SEM image of coated iron powders.
Electrical and Magnetic Properties of the Composites Magnetic hysteresis loops for Fe, Fe/Fe 3 O 4 -MnZnFe 2 O 4 , Fe 3 O 4 , and MnZnFe 2 O 4 powders are shown in Figure 7.The loops revealed the typical magnetic behavior of SMCs, at an external field of ~30,000 Oe.The magnetic parameters such as M s , M r , and the ratio of remnant magnetization to saturation magnetization (M r /M s ) are clearly presented in Table 1.It is found that the saturation magnetization of Fe/Fe 3 O 4 -MnZnFe 2 O 4 powders (215.6 A•m 2 /kg) is slightly lower than that of Fe powder (219.5 A•m 2 /kg) but higher than that of Fe 3 O 4 powders (84.1 A•m 2 /kg) and MnZnFe 2 O 4 powders (33.2 A•m 2 /kg).This result demonstrated that the ferrimagnetic insulation coating has a positive effect on the saturation magnetization of SMC.The M s value of composite iron powders decreased as the percentage of magnetic Fe 3 O 4 -MnZnFe 2 O 4 phase increased.When the percentage of Fe 3 O 4 -MnZnFe 2 O 4 increased from 1 wt.% to 6 wt.%, the M s decreased from 215 A•m 2 /kg to 207 A•m 2 /kg.As shown in the Figure Fe 3 O 4 - MnZnFe 2 O 4 ferrite.The imaginary permeability value of the compacted Fe/Fe 3 O 4 -MnZnFe 2 O 4 decreased from 34.5 to 14.3 when the content of ferrite decreased from 6 wt.% to 1 wt.%.The imaginary part of the compacted Fe/Fe 3 O 4 -MnZnFe 2 O 4 exhibited a relatively low value compared with the raw Fe compact, indicating that the magnetic loss is weakened for Fe/Fe 3 O 4 -MnZnFe 2 O 4 and can be ascribed to the high resistivity [20].Obviously, Fe 3 O 4 -MnZnFe 2 O 4 ferrite plays an important role in enhancing the electric resistivity of Fe powder as shown in Figure 9.With the increasing content of Fe 3 O 4 -MnZnFe 2 O 4 ferrite in the composite, the electric resistivity increased from 98.3 mΩ•m to 733.2 mΩ•m. Figure 9 . Figure 9. Resistivity of the composite compacts with different ferrite contents. Figure 9 . Figure 9. Resistivity of the composite compacts with different ferrite contents. Figure 9 . Figure 9. Resistivity of the composite compacts with different ferrite contents. Figure 10 . Figure 10.Core loss of the composite with different ferrite contents dependence on frequency at 0.05 T: (a) total core loss; (b) hysteresis core loss; (c) eddy current core loss. Author Contributions: Y.X., P.Y., and B.Y. designed the project; Y.X.performed the experiments, analyzed the data, and wrote the original draft; P.Y. and B.Y. reviewed the writing. Figure 10 . Figure 10.Core loss of the composite with different ferrite contents dependence on frequency at 0.05 T: (a) total core loss; (b) hysteresis core loss; (c) eddy current core loss. 
Figure 10b,c show the hysteresis loss and the eddy current loss of the compacted Fe/Fe3O4-MnZnFe2O4 composites at 50 mT as functions of frequency for different ferrite contents, calculated from the loss-separation equation above and the measured core loss data. As can be seen in Figure 10b, the hysteresis loss of the composite increased with increasing Fe3O4-MnZnFe2O4 ferrite content, and it is larger than the hysteresis loss of the raw Fe compact. The hysteresis loss mainly depends on the particle size, the residual stress, and the volume fraction of inclusions (pores, impurities, defects). The ferrite increases the distributed air gap in the composite, which hinders domain wall movement and consequently increases the hysteresis loss [21]. However, the eddy current loss of the Fe3O4-MnZnFe2O4 ferrite coated Fe cores was notably smaller than that of the raw Fe cores. This can be ascribed to the insulation effect of the Fe3O4-MnZnFe2O4 ferrite, which effectively reduces the eddy current loss.
Table 2. DC performance and density of the composites with different ferrite percentages.
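The hysteresis/eddy-current separation referred to in the Figure 10 discussion is commonly performed by fitting the loss per cycle to a linear function of frequency, since the hysteresis loss per cycle is frequency-independent while the classical eddy-current loss per cycle grows linearly with frequency. The sketch below uses made-up loss values, and the simple form P_total = W_h·f + k_e·f² is the standard textbook separation, which may differ in detail from the equation used in the paper.

import numpy as np

# Frequency points (Hz) and total core loss (W/kg); illustrative values only,
# not the measured data of the paper.
f = np.array([20e3, 40e3, 60e3, 80e3, 100e3])
p_total = np.array([30.0, 65.0, 104.0, 148.0, 196.0])

# Fit P_total / f = W_h + k_e * f  (loss per cycle versus frequency).
k_e, w_h = np.polyfit(f, p_total / f, 1)

p_hyst = w_h * f       # hysteresis contribution, linear in frequency
p_eddy = k_e * f**2    # eddy-current contribution, quadratic in frequency
print(f"hysteresis loss per cycle W_h = {w_h:.2e} J/kg, eddy coefficient k_e = {k_e:.2e}")

Plotting p_hyst and p_eddy against frequency reproduces the qualitative behaviour of Figure 10b,c: the hysteresis term dominates at low frequency, while the eddy-current term grows fastest and is the one suppressed by a more resistive (better insulated) compact.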
7,886.2
2018-09-06T00:00:00.000
[ "Materials Science" ]
Precise control of a strong X-Y coupling beam transportation for J-PARC muon g-2/EDM experiment
A strategy for designing a dedicated beam injection and storage scheme for the J-PARC Muon g-2/EDM experiment is described. To accomplish a three-dimensional beam injection into the MRI-type compact storage system, transverse beam phase spaces (X-Y coupling) and a pulsed kicker system are key to controlling the vertical motion inside the storage volume. Moreover, dedicated beam phase control through the beam channel of the storage magnet's yoke is crucial. We introduce a five-dimensional phase-space correlation in addition to strong X-Y coupling to control the stored vertical beam size to a level as small as one-third of that achievable by X-Y coupling alone.
Introduction
A new measurement of the muon's anomalous magnetic moment a_µ = (g − 2)/2 and its electric dipole moment (EDM) is being prepared at the J-PARC muon facility at the MLF, MUSE [1]. These physical quantities are suitable probes for exploring physics beyond the standard model. They are measured experimentally through the difference between the spin precession frequency and the orbital cyclotron frequency in a homogeneous magnetic field with no electric field, as expressed in Eq. 1. Here, two dipole moments of the muon are introduced, where c is the speed of light, q the unit charge, and m and g the mass and gyromagnetic ratio of the muon, respectively. For a non-zero η, and assuming that β·B = β·E = 0, the first term in Eq. 1, which expresses the muon magnetic moment, is orthogonal to the second term, which includes the EDM-related parameter η; the standard model requires η to be extremely small. The tilt angle of ω with respect to B is proportional to the magnitude of the EDM and is of the order of 1 mrad, considering the upper limit from the previous experiment E821 [2] (|d_µ| = 0.9 × 10⁻¹⁹ e·cm). To achieve a 100-times better sensitivity, the goal is 0.01 mrad.
At J-PARC, a slow muon source and muon LINAC technology have been developed [1] to obtain a low-emittance muon beam with a momentum of 300 MeV/c. The muons are then stored in a 3 T storage volume, and the diameter of the orbital cyclotron motion becomes only 0.66 m. This is the smallest storage ring for relativistic-energy beams in the world. To meet this technical challenge, a new beam injection scheme called the three-dimensional spiral injection scheme is being developed [3]. The beam enters the solenoid through a channel in the top iron yoke 110 cm above the storage volume (refer to Fig. 1), and its spiral motion is compressed by the Lorentz force due to the static radial fringe field (B_R). A vertical kick (a pulsed radial magnetic field) is applied to store the beam upon its arrival in the storage region. A small static weak-focusing field in the fiducial volume keeps the beam in the storage region.
Vertical Beam Motion Control by Pulsed Kicker
The relation between the single-track motion and the time structure of the pulsed kicker current is introduced here. The details of the design concept of the kicker coil shape are discussed in [5].
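Before turning to the kicker timing, the storage-orbit size quoted in the introduction can be verified with the one-line estimate r = p/(qB). The short sketch below is only a back-of-the-envelope check and is not part of the original analysis.

P_GEV = 0.300    # muon momentum in GeV/c
B_TESLA = 3.0    # storage field in T

# For a singly charged particle, r [m] = p [GeV/c] / (0.2998 * B [T]).
radius = P_GEV / (0.2998 * B_TESLA)
print(f"cyclotron radius = {radius:.3f} m, orbit diameter = {2 * radius:.2f} m")

This reproduces the 0.66 m orbit diameter stated in the text for a 300 MeV/c muon in a 3 T field.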
Single Track Motion Controlled by Pulsed Kicker
Figure 2 illustrates the vertical position of a single track as a function of time. The effective magnetic field along the trajectory is also presented. We employed a half-sine-shaped kicker pulse with a duration of T_K = 120 ns. The effective radial field along a single track is shown as well, together with a correlation between the vertical positions and pitch angles along the trajectory. The role of the vertical pulsed kicker is to guide the trajectory such that both the vertical position and the pitch angle are zero when the kick current returns to zero. There is freedom to design the trajectory during the kick period, depending on the kicker coil shape, duration and current, provided that the integrated B_R L along the trajectory satisfies Eq. 5 [5].
Beam Motion with Expected Beam Phase Space
Considering the expected beam phase space (ε_x,y ~ 0.6 mm-mrad) and momentum dispersion ∆p/p ~ 5 × 10⁻⁴ from the upstream beam line, we can see how the trajectories of the particles differ (Fig. 3). The upstream beam transport line [6] is dedicated to controlling the X-Y coupling in the beam frame at the injection point, as shown in Fig. 4. X-Y coupling is a strong tool for controlling the stored beam distribution to z < 10 cm. However, there is no clear criterion to distinguish the black and red subgroups. We must determine how to control the beam phase space to achieve |z| < 3 cm.
As a trial, we attempted a stronger kicker case, as shown in Fig. 5, to confirm how well the beam distribution can be controlled after the kick. Without changing the beam conditions, including the X-Y coupling, or the kicker coil shape, we shortened T_K (120 → 85 ns) and increased I_0 (0.9 → 2 kA). To satisfy the experimental requirements, a smaller vertical beam distribution |z| after the kick is favored. The trajectories drawn in red correspond to |z| < 3 cm, indicating ideal beam injection and storage. By contrast, the black lines correspond to trajectories stored in the weak focusing magnetic field outside this requirement. The left-hand plot in Fig. 6 shows the time slice of the vertical phase space at the end of the kick for the moderate-kicker case. The middle and right-hand plots in Fig. 6 display the integrated B_R L distributions. These plots indicate that a stronger kick can control B_R L better, which is consistent with the stronger kink shape in Fig. 5 than in Fig. 4. However, a voltage of V > 80 kV would be required for the kicker coils, which may cause severe technical difficulties in the kicker conductor design. Therefore, the strong-kicker case shown in Fig. 5 is not a realistic solution. To maintain V < 30 kV, we need to consider how the beam phase space should be controlled, starting from the Fig. 3 trajectories.
Beam Phase Space Study
We investigated how a smaller |z| distribution can be realized while keeping the kicker coil voltage at V < 30 kV. One issue to be considered is the nonlinear magnetic field effect through the beam injection channel in the yoke. The left-hand picture in Fig.
7 shows a magnified view of the beam injection channel from above, through the yoke of the storage magnet (OPERA-3D model). The plot on the right shows the magnetic flux distribution along the beam trajectories. These nonlinear magnetic field components, particularly at the channel exit, may affect the beam phase space and cause the unclear difference between the red and black distributions. In addition to X-Y coupling, we studied the beam phase space using three parameters: the time-sliced |r|, θ, and vertical distributions z_g, which are derived from the six components of each trajectory in the global coordinate system, r = (x_g, y_g, z_g), p = (p_x, p_y, p_z). (6) The upper plots in Fig. 8 show the correlations among these three parameters at different positions along the beam injection trajectory. As introduced previously, the red and black distributions are separated at z_g ~ 0.95 m (inside the yoke of the storage magnet), but not at z_g ~ 1.40 m (the entrance point of the channel in the yoke of the storage magnet). It appears that the correlation of the time-sliced |r|, θ, and z_g distributions is not sufficient to distinguish between the red and black subgroups.
Correlation Finding with Five Phase-Space Parameters through the Channel
Each beam trajectory has six phase-space parameters, but it is sufficient to consider five of them as independent, because the beam momentum can be considered fixed. We introduce two additional angles, ψ and φ, constructed from the momentum components. We define an n × 5 matrix M_red whose rows contain, for each trajectory of the red subgroup, the deviations of the five parameters from their mean values, where n is the number of trajectories in the red subgroup and the means of r, θ, z_g, ψ, and φ are taken over the red subgroup in Fig. 8. We apply singular value decomposition to M_red and obtain the eigenvector corresponding to the smallest eigenvalue, q = (q_r, q_θ, q_z, q_ψ, q_φ). (9) We then define an N × 5 matrix M_tot of all trajectories and estimate the vector d = M_tot q. Here, N is the sum of the black and red subgroup trajectories, and the N components of d are residues. We find that d indicates the difference between the red and black subgroups, as shown in the lower histograms in Fig. 8. The eigenvector q at different time slices indicates a correlation of the five-dimensional phase space.
Conclusion
X-Y coupling can control the stored beam distribution to z < 10 cm. In addition to strong X-Y coupling, a five-parameter phase-space correlation should be considered to control the precise vertical beam motion in the storage volume. Design work for a magnetic shield tube at the injection channel, which would make the distribution of d narrower, is ongoing. An additional multipole magnet at the injection point is also under consideration.
Figure 2. Left: Vertical motion of a single track as a function of time. Right: Correlation of vertical position and pitch angle.
Figure 4. X-Y coupling in the beam frame at the injection point, controlled in the upstream beam transport line [6].
Figure 6. Left: Time-sliced vertical phase space at the end of the moderate kick and the B_R L distribution. Red corresponds to |z| < 3 cm. Closed ellipses are from the weak focusing field [5]. Middle and right: Integrated B_R L at the end of the kick.
Figure 7. Left: Top view of the channel in the yoke and trajectories (OPERA-3D). Right: Magnetic field along the trajectories. Nonlinear magnetic field effects need to be considered.
Figure 8.
Upper: |r|, θ and z_g correlations at different time slices; it is hard to distinguish between the red and black groups. Lower: the components of the residue vector d give a handle for controlling the beam to |z| < 3 cm.
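The correlation-finding procedure described in the previous section (center the red-subgroup parameters, take the singular direction of least variance, and project all trajectories onto it) can be sketched in a few lines of numpy. The array shapes and the synthetic random data below are assumptions for illustration only; they are not the experiment's actual phase-space samples.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are trajectories, columns are (|r|, theta, z_g, psi, phi).
red = rng.normal(size=(500, 5))          # trajectories reaching |z| < 3 cm after the kick
black = rng.normal(size=(800, 5)) + 0.5  # the remaining trajectories

# Center the red subgroup on its own means (the mean values of the text).
means = red.mean(axis=0)
m_red = red - means

# SVD: the right-singular vector with the smallest singular value is the
# direction along which the red subgroup varies least.
_, _, vt = np.linalg.svd(m_red, full_matrices=False)
q = vt[-1]                               # q = (q_r, q_theta, q_z, q_psi, q_phi)

# Residues d = M_tot q for all trajectories, using the red-subgroup means.
m_tot = np.vstack([red, black]) - means
d = m_tot @ q
print("mean |d| (red):", np.abs(d[:500]).mean(), " (black):", np.abs(d[500:]).mean())

With real, correlated phase-space data the red subgroup clusters near d ≈ 0 while the black one spreads more widely, which is the behaviour exploited in the lower histograms of Fig. 8; the random numbers above only exercise the mechanics of the calculation.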
2,336.6
2024-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Global regularity for systems with $p$-structure depending on the symmetric gradient
In this paper we study, on smooth bounded domains, the global regularity (up to the boundary) for weak solutions to systems having $p$-structure depending only on the symmetric part of the gradient.
Introduction
In this paper we study the regularity of weak solutions to the boundary value problem
− div S(Du) = f in Ω, u = 0 on ∂Ω, (1.1)
where Du := 1/2 (∇u + ∇u⊤) denotes the symmetric part of the gradient ∇u and where Ω ⊂ R³ is a bounded domain with a C^{2,1} boundary ∂Ω. Our interest in this system comes from the p-Stokes system
− div S(Du) + ∇π = f in Ω, div u = 0 in Ω. (1.2)
In both problems the typical example for S we have in mind is a power-law stress of the type recalled in Section 2 below, with p ∈ (1, 2], δ ≥ 0, and µ > 0. In previous investigations of (1.2) only suboptimal results for the regularity up to the boundary have been proved. Here we mean suboptimal in the sense that the results are weaker than those known for p-Laplacian systems, cf. [1,13,14]. Clearly, the system (1.1) is obtained from (1.2) by dropping the divergence constraint and the resulting pressure gradient. Thus the system (1.1) lies in between the system (1.2) and p-Laplacian systems, which depend on the full gradient ∇u. We would like to stress that the system (1.1) is of independent interest, since it is studied within plasticity theory, when formulated in the framework of deformation theory (cf. [11,24]). In this context the unknown is the displacement vector field u = (u₁, u₂, u₃)⊤, while the external body force f = (f₁, f₂, f₃)⊤ is given, and the stress tensor S is the tensor of small elasto-plastic deformations. (We restrict ourselves to the problem in three space dimensions, even if the results can easily be transferred to the problem in R^d for all d ≥ 2.)
We study global regularity properties of weak solutions to (1.1) in sufficiently smooth and bounded domains Ω; we obtain for all p ∈ (1, 2] the optimal result, namely that F(Du) belongs to W^{1,2}(Ω), where the nonlinear tensor-valued function F is defined in (2.8). This result has been proved near a flat boundary in [24] and is the same result as for p-Laplacian systems (cf. [1,13,14]). The situation is quite different for (1.2). There the optimal result, i.e. F(Du) ∈ W^{1,2}(Ω), is only known for (i) two-dimensional bounded domains (cf. [16], where even the p-Navier-Stokes system is treated), (ii) the space-periodic problem in R^d, d ≥ 2, which follows immediately from the interior estimates F(Du) ∈ W^{1,2}_{loc}(Ω), known in all dimensions, together with the periodicity of the solution, (iii) the case where the no-slip boundary condition is replaced by perfect slip boundary conditions (cf. [17]), and (iv) the case of small f (cf. [6]). We also observe that the above results for the p-Stokes system (apart from those in the space-periodic setting) require the stress tensor to be non-degenerate, that is δ > 0. In the case of homogeneous Dirichlet boundary conditions and three- and higher-dimensional bounded, sufficiently smooth domains, only suboptimal results are known. To our knowledge the state of the art for general data is that F(Du) ∈ W^{1,2}_{loc}(Ω), tangential derivatives of F(Du) near the boundary belong to L², while the normal derivative of F(Du) near the boundary belongs to some L^q, where q = q(p) < 2 (cf. [2,4] and the discussion therein). We would also like to mention a result for another system between (1.2) and the p-Laplacian system, namely if (1.2) is considered with S depending on the full velocity gradient ∇u.
In this case it is proved in [7] that u ∈ W 2,r (R 3 ) ∩ W 1,p 0 (R 3 ) for some r > 3, provided p < 2 is very close to 2. In the present paper we extend to the general case of bounded sufficiently smooth domains and to possibly degenerate stress tensors, that is the case δ = 0, the optimal regularity result for (1.1) of Seregin and Shilkin [24] in the case of a flat boundary. The precise result we prove is the following: Theorem 1.3. Let the tensor field S in (1.1) have (p, δ)-structure for some p ∈ (1, 2], and δ ∈ [0, ∞), and let F be the associated tensor field to S. Let Ω ⊂ R 3 be a bounded domain with C 2,1 boundary and let f ∈ L p ′ (Ω). Then, the unique weak solution u ∈ W 1,p 0 (Ω) of the problem (1.1) satisfies where c denotes a positive function which is non-decreasing in f p ′ and δ, and which depends on the domain through its measure |Ω| and the C 2,1 -norms of the local description of ∂Ω. In particular, the above estimate implies that u ∈ W 2, 3p p+1 (Ω). Preliminaries and main results In this section we introduce the notation we will use, state the precise assumptions on the extra stress tensor S, and formulate the main results of the paper. 2.1. Notation. We use c, C to denote generic constants, which may change from line to line, but are independent of the crucial quantities. Moreover, we write f ∼ g if and only if there exists constants c, C > 0 such that c f ≤ g ≤ C f . In some cases we need to specify the dependence on certain parameters, and consequently we denote by c( . ) a positive function which is non-decreasing with respect to all its arguments. We use standard Lebesgue spaces (L p (Ω), . p ) and Sobolev spaces (W k,p (Ω), . k,p ), where Ω ⊂ R 3 , is a sufficiently smooth bounded domain. The space W 1,p 0 (Ω) is the closure of the compactly supported, smooth functions C ∞ 0 (Ω) in W 1,p (Ω). Thanks to the Poincaré inequality we equip W 1,p 0 (Ω) with the gradient norm ∇ · p . When dealing with functions defined only on some open subset G ⊂ Ω, we denote the norm in L p (G) by . p,G . As usual we use the symbol ⇀ to denote weak convergence, and → to denote strong convergence. The symbol spt f denotes the support of the function f . We do not distinguish between scalar, vector-valued or tensor-valued function spaces. However, we denote vectors by boldface lower-case letter as e.g. u and tensors by boldface upper case letters as e.g. S. For vectors u, v ∈ R 3 we denote u Greek lower-case letters take only values 1, 2, while Latin lower-case ones take the values 1, 2, 3. We use the summation convention over repeated indices only for Greek lower-case letters, but not for Latin lower-case ones. are satisfied for all P, Q ∈ R 3×3 with P sym = 0 and all i, j, k, l = 1, 2, 3. The constants κ 0 , κ 1 , and p are called the characteristics of S. Remark 2.7. (i) Assume that S has (p, δ)-structure for some δ ∈ [0, δ 0 ]. Then, if not otherwise stated, the constants in the estimates depend only on the characteristics of S and on δ 0 , but are independent of δ. (ii) An important example of a tensor field S having (p, δ)-structure is given by S(P) = ϕ ′ (|P sym |)|P sym | −1 P sym . In this case the characteristics of S, namely κ 0 and κ 1 , depend only on p and are independent of δ ≥ 0. Proposition 2.9. Let S have (p, δ)-structure, and let F be defined in (2.8). Then The constants depend only on the characteristics of S. For a detailed discussion of the properties of S and F and their relation to Orlicz spaces and N-functions we refer the reader to [23,3]. 
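For orientation, the quantities referred to above (the typical example of S, the conditions in Definition 2.5 and the tensor field F from (2.8)) can be written out explicitly. The displayed formulas are missing from this excerpt, so the following is a reconstruction using the forms commonly employed in the literature on (p, δ)-structure; the exact constants and normalisations of the paper may differ.

\[
S(Du) = \mu\,(\delta + |Du|)^{p-2}\,Du, \qquad p \in (1,2], \ \delta \ge 0, \ \mu > 0,
\]
\[
\sum_{i,j,k,l} \partial_{kl} S_{ij}(P)\,Q_{ij}Q_{kl} \ \ge\ \kappa_0\,(\delta + |P^{\mathrm{sym}}|)^{p-2}\,|Q^{\mathrm{sym}}|^2,
\qquad
|\partial_{kl} S_{ij}(P)| \ \le\ \kappa_1\,(\delta + |P^{\mathrm{sym}}|)^{p-2},
\]
\[
F(P) := (\delta + |P^{\mathrm{sym}}|)^{\frac{p-2}{2}}\,P^{\mathrm{sym}}.
\]

With these definitions, the equivalences of Proposition 2.9 take the usual form
\[
\big(S(P)-S(Q)\big)\cdot\big(P-Q\big) \ \sim\ |F(P)-F(Q)|^2 \ \sim\ (\delta + |P^{\mathrm{sym}}| + |Q^{\mathrm{sym}}|)^{p-2}\,|P^{\mathrm{sym}}-Q^{\mathrm{sym}}|^2,
\]
with constants depending only on the characteristics of S.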
Since in the following we shall insert into S and F only symmetric tensors, we can drop in the above formulas the superscript " sym " and restrict the admitted tensors to symmetric ones. We recall that the following equivalence, which is proved in [3, Lemma 3.8], valid for all smooth enough symmetric tensor fields Q ∈ R 3×3 sym . The proof of this equivalence is based on Proposition 2.9. This Proposition and the theory of divided differences also imply (cf. [4, (2.26 for all smooth enough symmetric tensor fields Q ∈ R 3×3 sym . A crucial observation in [24] is that the quantities in (2.11) are also equivalent to several further quantities. To formulate this precisely we introduce for i = 1, 2, 3 and for sufficiently smooth symmetric tensor fields Q the quantity (2. 13) Recall, that in the definition of P i (Q) there is no summation convention over the repeated Latin lower-case index i in ∂ i S(Q)·∂ i Q. Note that if S has (p, δ)-structure, then P i (v) ≥ 0, for i = 1, 2, 3. There hold the following important equivalences, first proved in [24]: Proposition 2.14. Assume that S has (p, δ)-structure. Then the following equivalences are valid, for all smooth enough symmetric tensor fields Q and all i = 1, 2, 3 with constants only depending on the characteristics of S. Proof. The assertions are proved in [24] using a different notation. For the convenience of the reader we sketch the proof here. The equivalences in (2.15) follow from (2.11), (2.13) and the fact that S has (p, δ)-structure. Furthermore, we have, using (2.15), which proves one inequality of (2.16). The other follows from where we used (2.6) and (2.15). Existence of weak solutions. In this section we define weak solutions of (1.1), recall the main results of existence and uniqueness and discuss a perturbed problem, which is used to justify the computations that follow. From now on we restrict ourselves to the case p ≤ 2. We have the following very standard result: Proof. The assertions follow directly from the assumptions, by using the theory of monotone operators. In order to justify some of the following computations we find it convenient to consider a perturbed problem, where we add to the tensor field S with (p, δ)structure a linear perturbation. Using again the theory of monotone operators one can easily prove: 19. Let the tensor field S in (1.1) have (p, δ)-structure for some p ∈ (1, 2], and δ ∈ [0, ∞) and let f ∈ L p ′ (Ω) be given. Then, there exists a unique weak solution The solution u ε satisfies the estimate Remark 2.22. In fact, one could already prove more at this point. Namely, that for ε → 0, the unique solution u ε converges to the unique weak solution u of the unperturbed problem (1.1). Let us sketch the argument only, since later we get the same result with different easier arguments. From (2.21) and the properties of S follows that Passing to the limit in the weak formulation of the perturbed problem, we get One can not show directly that lim ε→0 Ω εDu ε · (Du ε − Du) dx = 0, since Du belongs to L p (Ω) only. Instead one uses the Lipschitz truncation method (cf. [10,22]). Denoting by v ε,j the Lipschitz truncation of ξ(u ε − u), where ξ ∈ C ∞ 0 (Ω) is a localization, one can show, using the ideas from [10,22], that which implies Du ε → Du almost everywhere in Ω. Consequently, we have χ = S(Du), since weak and a.e. limits coincide. Description and properties of the boundary. 
We assume that the boundary ∂Ω is of class C 2,1 , that is for each point P ∈ ∂Ω there are local coordinates such that in these coordinates we have P = 0 and ∂Ω is locally described by a C 2,1 -function, i.e., there exist R P , R ′ P ∈ (0, ∞), r P ∈ (0, 1) and a C 2,1 -function open ball with center 0 and radius r > 0. Note that r P can be made arbitrarily small if we make R P small enough. In the sequel we will also use, for 0 < λ < 1, the following scaled open sets, λ Ω P ⊂ Ω P defined as follows To localize near to ∂Ω ∩ ∂Ω P , for P ∈ ∂Ω, we fix smooth functions ξ P : is the indicator function of the measurable set A. For the remaining interior estimate we localize by a smooth function 0 ≤ ξ 00 ≤ 1 with spt ξ 00 ⊂ Ω 00 , where Ω 00 ⊂ Ω is an open set such that dist(∂Ω 00 , ∂Ω) > 0. Since the boundary ∂Ω is compact, we can use an appropriate finite sub-covering which, together with the interior estimate, yields the global estimate. Let us introduce the tangential derivatives near the boundary. To simplify the notation we fix P ∈ ∂Ω, h ∈ (0, RP 16 ), and simply write ξ := ξ P , a := a P . We use the standard notation x = (x ′ , x 3 ) ⊤ and denote by e i , i = 1, 2, 3 the canonical orthonormal basis in R 3 . In the following lower-case Greek letters take values 1, 2. For a function g with spt g ⊂ spt ξ we define for α = 1, 2 and if ∆ + g := g τ − g, we define tangential divided differences by d + g := h −1 ∆ + g. It holds that, if g ∈ W 1,1 (Ω), then we have for α = 1, 2 almost everywhere in spt ξ, (cf. [18,Sec. 3]). Conversely uniform L q -bounds for d + g imply that ∂ τ g belongs to L q (spt ξ). For simplicity we denote ∇a := (∂ 1 a, ∂ 2 a, 0) ⊤ . The following variant of integration per parts will be often used. Consequently, Ω f d + g dx = Ω (d − f )g dx. Moreover, if in addition f and g are smooth enough and at least one vanishes on ∂Ω, then Proof of the main result In the proof of the main result we use finite differences to show estimates in the interior and in tangential directions near the boundary and calculations involving directly derivatives in "normal" directions near the boundary. In order to justify that all occurring quantities are well posed, we perform the estimate for the approximate system (2.20). The first intermediate step is the following result for the approximate problem. (3.2) Here ξ 0 is a cut-off function with support in the interior of Ω, while for arbitrary P ∈ ∂Ω the function ξ P is a cut-off function with support near to the boundary ∂Ω, as defined in Sec. 2.4. The tangential derivative ∂ τ is defined locally in Ω P by (2.24). Moreover, there exists a constant C 1 > 0 such that provided that in the local description of the boundary there holds r P < C 1 in (b3). The two estimates (3.2) are uniform with respect to ε and could be also proved directly for the problem (1.1). However, the third estimate (3.3) depends on ε but is needed to justify all subsequent steps, which will give the proof of an estimate uniformly in ε, by using a different technique. [4] due to the missing divergence constraint. In fact it adapts techniques known from nonlinear elliptic systems. For the convenience of the reader we recall the main steps here. Fix P ∈ ∂Ω and use in Ω where ξ := ξ P , a := a P , and h ∈ (0, RP 16 ), as a test function in the weak formulation of (2.20). This yields From the assumption on S, Proposition 2.9, and [4, Lemma 3.11] we have the following estimate This proves the second estimate in (3.2) by standard arguments. 
The first estimate in (3.2) is proved in the same way with many simplifications, since we work in the interior where the method works for all directions. This estimate implies that u ε ∈ W 2,2 loc (Ω) and that the system (2.20) is well-defined point-wise a.e. in Ω. To estimate the derivatives in the x 3 direction we use equation (2.20) and it is at this point that we have changes with respect to the results in [4]. In fact, as usual in elliptic problems, we have to recover the partial derivatives with respect to x 3 by using the information on the tangential ones. In this problem the main difficulty is that the leading order term is nonlinear and depends on the symmetric part of the gradient. Thus, we have to exploit the properties of (p, δ)-structure of the tensor S (cf. Definition 2.5). Denoting, for 3 i = 1, 2, 3, Contrary to the corresponding equality [4, equation (3.49)], here we use directly all the equations in (1.1), and not only the first two. Now we multiply these equations not by ∂ 3 D i3 u ε as expected, but by ∂ 3 D i3 u ε , where D αβ u ε = 0, for α, β = 1, 2, D α3 u ε = D 3α u ε = 2D α3 u ε , for α = 1, 2, D 33 u ε = D 33 u ε . Summing over i = 1, 2, 3 we get, by using the symmetries in Remark 2.7 (iii), that in Ω . By straightforward manipulations (cf. [4, Sections 3.2 and 4.2]) we can estimate the right-hand side as follows Note that we can deduce from b information about b i := ∂ 2 33 u i ε , i = 1, 2, 3, because |b| ≥ 2| b| − |∂ τ ∇u ε | − ∇a ∞ |∇ 2 u ε | holds a.e. in Ω P . This and the last last two inequalities imply a.e. in Ω P Adding on both sides, for α = 1, 2 and i, k = 1, 2, 3 the term (ε + ϕ ′′ (|Du ε |)) |∂ α ∂ i u k ε | , and using on the right-hand side the definition of the tangential derivative (cf. (2.24)), we finally arrive at which is valid a.e. in Ω P . Note that the constant c only depends on the characteristics of S. Next, we can choose the open sets Ω P in such a way that ∇a P (x) ∞,ΩP is small enough, so that we can absorb the last term from the right hand side, which yields where again the constant c only depends on the characteristics of S. By neglecting the second term on the left-hand side (which is non-negative), raising the remaining inequality to the power 2, and using that S has (p, δ)-structure for p < 2 we obtain The already proven results on tangential derivatives and Korn's inequality imply that the last integral from right-hand side is finite. Thus, the properties of the covering imply the last estimate in (3.2). 3.1. Improved estimates for normal derivatives. The proof of (3.3) used the system (2.20) and resulted in an estimate that is not uniform with respect to ε. In this section, by following the ideas in [24], we proceed differently and estimate P 3 in terms of quantities occurring in (3.2). The main technical step of the paper is the proof of the following result: Proposition 3.5. Let the same hypotheses as in Theorem 1.3 be satisfied with δ > 0 and let the local description a P of the boundary and the localization function ξ P satisfy (b1)-(b3) and (ℓ1) (cf. Section 2.4). Then, there exist a constant C 2 > 0 such that the weak solution u ε ∈ W 1,2 0 (Ω) of the approximate problem (2.20) satisfies for every P ∈ ∂Ω Proof. Let us fix an arbitrary point P ∈ ∂Ω and a local description a = a P of the boundary and the localization function ξ = ξ P satisfying (b1)-(b3) and (ℓ1). In the following we denote by C constants that depend only on the characteristics of S. 
First we observe that, by the results of Proposition 2.14 there exists a constant C 0 , depending only on the characteristics of S, such that a.e. in Ω. Thus, we get, using also the symmetry of Du ε and S, To estimate I 2 we multiply and divide by the quantity ϕ ′′ (|Du ε |) = 0, use Young's inequality and Proposition 2.14. This yields that for all λ > 0 there exists c λ > 0 such that Here and in the following we denote by c λ constants that may depend on the characteristics of S and on λ −1 , while C denotes constants that may depend on the characteristics of S only. To treat the third integral I 3 we proceed as follows: We use the well-known algebraic identity, valid for smooth enough vectors v and i, j, k = 1, 2, 3, 6) and the equations (2.20) point-wise, which can be written for j = 1, 2, 3 as, This is possible due to Proposition 3.1. Hence, we obtain The right-hand side can be estimated similarly as I 2 . This yields that for all λ > 0 there exists c λ > 0 such that estimated by Observe that we used p ≤ 2 to estimate the term involving f . To estimate I 1 we employ the algebraic identity (3.6) to split the integral as follows The first term is estimated similarly as I 2 , yielding for all λ > 0 To estimate B we observe that by the definition of the tangential derivative we have and consequently the term B can be split into the following three terms: We estimate B 2 as follows The term B 3 is estimated similarly as I 2 , yielding for all λ > 0 Concerning the term B 1 , we would like to perform some integration by parts, which is one of the crucial observations we are adapting from [24]. Neglecting the localization ξ in B 1 we would like to use that This formula can be justified by using an appropriate approximation, that exists for u ε ∈ W 1,2 0 (Ω) ∩ W 2,2 (Ω) since ∂ τ u ε = 0 on ∂Ω. More precisely, to treat the term B 1 we use that the solution u ε of (2.20) belongs to W 1,2 0 (Ω) ∩ W 2,2 (Ω). Thus, ∂ τ u ε|Ω P = 0 on ∂Ω P ∩ ∂Ω, hence ξ P ∂ τ (u 3 ε ) = 0 on ∂Ω. This implies that we can find a sequence (S n , U n ) ∈ C ∞ (Ω) × C ∞ 0 (Ω) such that (S n , U n ) → (S ε , ∂ τ u ε ) in W 1,2 (Ω) × W 1,2 0 (Ω) and perform calculations with (S n , U n ), showing then that all formulas of integration by parts are valid. Passage to the limit as n → +∞ is done only in the last step. For simplicity we drop the details of this well-known argument (sketched also in [24]) and we write directly formulas without this smooth approximation. Thus, performing several integrations by parts, we get This shows that To estimate B 1,1 , B 1,3 , B 1,4 , B 1,6 we observe that By using Young inequality, the growth properties of S in (2.10d) and (2.12) we get Similarly we get To estimate B 1,2 and B 15 we observe that, using the algebraic identity (3.6) and the defintion of the tangential derivative, Hence by substituting and again by the same inequalities as before we arrive to the following estimates Collecting all estimates and using that ∇a ∞ ≤ r P ≤ 1, we finally obtain . The quantities that are bounded uniformly in L 2 (Ω P ) are the tangential derivatives of ε Du ε and of F(Du ε ). By definition we have and if we substitute we obtain By choosing first λ > 0 small enough such that λ C < 4 −1 C 0 and then choosing in the local description of the boundary R = R P small enough such that c λ ∇a ∞ < 4 −1 C 0 , we can absorb the first two terms from the right-hand side into the left-hand side to obtain , where now c λ depends on the fixed paramater λ, the characteristics of S and on C 2 . 
The right-hand side is bounded uniformly with respect to ε > 0, due to Proposition 3.1, proving the assertion of the proposition. Choosing now an appropriate finite covering of the boundary (for the details see also [4]), Propositions 3.1-3.5 yield the following result: Theorem 3.8. Let the same hypotheses as in Theorem 1.3 with δ > 0 be satisfied. Then, it holds 3.2. Passage to the limit. Once this has been proved, by means of appropriate limiting process we can show that the estimate is inherited by u = lim ε→0 u ε , since u is the unique solution to the boundary value problem (1.1). We can now give the proof of the main result Proof (of Theorem 1.3). Let us firstly assume that δ > 0. From Proposition 2.19, Proposition 2.9 and Theorem 3.8 we know that F(Du ε ) is uniformly bounded with respect to ε in W 1,2 (Ω). This also implies (cf. [3,Lemma 4.4]) that u ε is uniformly bounded with respect to ε in W 2,p (Ω). The properties of S and Proposition 2.19 also yield that S(Du ε ) is uniformly bounded with respect to ε in L p ′ (Ω). Thus, there exists a subsequence {ε n } (which converges to 0 as n → +∞), u ∈ W 2,p (Ω), F ∈ W 1,2 (Ω), and χ ∈ L p ′ (Ω) such that u εn ⇀ u in W 2,p (Ω) ∩ W 1,p 0 (Ω) , Du εn → Du a.e. in Ω , The continuity of S and F and the classical result stating that the weak limit and the a.e. limit in Lebesgue spaces coincide (cf. [12]) imply that F = F(Du) and χ = S(Du) . These results enable us to pass to the limit in the weak formulation of the perturbed problem (2.20), which yields where we also used that lim εn→0 Ω ε n Du εn · Dv dx = 0. By density we thus know that u is the unique weak solution of problem (1.1). Finally the lower semicontinuity of the norm implies that Ω |∇F(Du)| 2 dx ≤ lim inf εn→0 Ω |∇F(Du εn )| 2 dx ≤ c, ending the proof in the case δ > 0. Now we can finish the proof in the same way as in the case δ > 0.
6,410.6
2017-12-06T00:00:00.000
[ "Mathematics" ]
Measurement of charged particle multiplicities and densities in pp collisions at √ s = 7 TeV in the forward region Charged particle multiplicities are studied in proton–proton collisions in the forward region at a centre-of-mass energy of √ s = 7 TeV with data collected by the LHCb detector. The forward spectrometer allows access to a kinematic range of 2 . 0 < η < 4 . 8 in pseudorapidity, momenta greater than 2 GeV/ c and transverse momenta greater than 0 . 2 GeV/ c . The measurements are performed using events with at least one charged particle in the kinematic acceptance. The results are presented as functions of pseudorapidity and transverse momentum and are compared to predictions from several Monte Carlo event generators. The phenomenology of soft quantum chromodynamic (QCD) processes such as light particle production in proton-proton (pp) collisions cannot be predicted using perturbative calculations, but can be described by models implemented in Monte Carlo event generators.The calculation of the fragmentation and hadronization processes as well as the modelling of the final states arising from the soft component of a collision (underlying event) are treated differently in the various event generators [1].The phenomenological models contain parameters that need to be tuned depending on the collision energy and colliding particles species.This is typically achieved using soft QCD measurements.The LHCb collaboration reported measurements on energy flow [2], production cross-sections [3,4] and production ratios of various particle species [5] in the forward region, all of which provide information for event generator optimization. A fundamental input used for the tuning process is the measurement of prompt charged particle multiplicities in single pp interactions.In combination with the study of the corresponding momentum spectra and angular distributions, these measurements can be used to gain a better understanding of hadron collisions.An accurate description of the underlying event is vital for understanding backgrounds in beyond the Standard Model searches or precision measurements of the Standard Model parameters.Previous measurements of charged particle multiplicities performed at the Large Hadron Collider (LHC) were reported by the ATLAS [6], CMS [7] and ALICE [8] collaborations.All of these measurements were performed in the central pseudorapidity region.The forward region was studied with the LHCb detector, where an inclusive multiplicity measurement without momentum information was performed [9]. In this paper, pp interactions at a centre-of-mass energy of √ s = 7 TeV that produce at least one prompt charged particle in the pseudorapidity range of 2.0 < η < 4.8, with a momentum of p > 2 GeV/c and transverse momentum of p T > 0.2 GeV/c, are studied. A prompt particle is defined as one that originates from the primary interaction, either directly, or through the subsequent decay of a resonance.The information from the full tracking system of the LHCb detector is used, which permits the measurement of the momentum dependence of charged particle multiplicities.Multiplicity distributions, P (n), for prompt charged particles are reported for the total accessible phase space region as well as for η and p T ranges.In addition, mean particle densities are presented as functions of transverse momentum, dn/dp T , and of pseudorapidity, dn/dη. The paper is organised as follows.In Sect. 
2 a brief description of the LHCb detector and an overview of track reconstruction algorithms are provided.The recorded data set and Monte Carlo simulations are described in Sect.3, followed by a discussion of the definition of visible event and the data selection in Sect. 4. The analysis method is described in Sect.5, and systematic uncertainties are given in Sect.6.The final results are compared to event generator predictions in Sects.7 and 8, respectively, before summarising in Sect.9. LHCb detector and track reconstruction The LHCb detector [10] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks.The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes placed downstream.The combined tracking system provides a momentum measurement with relative uncertainty that varies from 0.4 % at 5 GeV/c to 0.6 % at 100 GeV/c, and impact parameter resolution of 20 µm for tracks with large transverse momentum.The direction of the magnetic field of the spectrometer dipole magnet is reversed regularly.Different types of charged hadrons are distinguished by information from two ring-imaging Cherenkov detectors.Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillatingpad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter.Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers.The trigger consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies full event reconstruction. The reconstruction algorithms provide different track types depending on the subdetectors considered.Only two types of tracks are used in this analysis.VELO tracks are only reconstructed in the VELO sub-detector and provide no momentum information.Long tracks are reconstructed by extrapolating VELO tracks through the magnetic dipole field and matching them with hits in the downstream tracking stations, providing momentum information.This is the highest-quality track type and is used for most physics analyses.Requiring charged particles to stay within the geometric acceptance of the LHCb detector after deflection by the magnetic field further restricts the accessible phase space to a minimum momentum of around 2 GeV/c.The LHCb detector design minimizes the material of the tracking detectors and allows a high track-reconstruction efficiency even for particles with low momenta.However, the limited number of tracking stations results in the presence of misreconstructed (fake) tracks.A reconstructed track is considered as fake if it does not correspond to the trajectory of a genuine charged particle.The fraction of fake long tracks is non-negligible as the extrapolation of a track through the magnetic field is performed over a distance of several meters, resulting in wrong association between VELO tracks and track segments reconstructed downstream.Another source of wrong track assignment arises from duplicate tracks.These track pairs either share a certain number of hits or consist of different track segments originating from a single particle. 
Data set and simulation The measurements are performed using a minimum-bias data sample of pp collisions at a centre-of-mass energy of √ s =7 TeV collected during 2010.In this low-luminosity running period, the average number of interactions in the detector acceptance per recorded bunch crossing was less than 0.1.The contribution from bunch crossings with more than one collision (pile-up events) is determined to be less than 4 % and is considered as a correction in the analysis.The data consists of 3 million events recorded in equal proportion for both magnetic field polarities.The low luminosity and interaction rate of the proton beams allowed the LHCb detector to be operated with a simplified trigger scheme.For the minimum-bias data set of this analysis, the hardware stage of the trigger system accepted all events, which were then reconstructed by the higher-level software trigger.Events with at least one reconstructed track segment in the VELO were selected. Fully simulated minimum-bias pp collisions are generated using the Pythia 6.4 event generator [11] with a specific LHCb configuration [12] using CTEQ6L [13] parton density functions (PDFs).This implementation, called the LHCb tune, contains contributions from elastic and inelastic processes, where the latter also include single and double diffractive components.Decays of hadrons are performed by EvtGen [14], in which final-state radiation is generated using Photos [15].The interaction of the generated particles with the detector and its response are implemented using the Geant4 toolkit [16], as described in Ref. [17].Processing, reconstruction and selection are identical for simulated events and data.The simulation is used to determine correction factors for the detector acceptance and resolution as well as for quantifying background contributions and reconstruction performance. The measurements are compared to predictions of two classes of generators, those that have not been optimized using LHC data and those that have.The former includes the Perugia 0 and Perugia NOCR [18] tunes of Pythia 6, both of which rely on CTEQ5L [19] PDFs, and the Phojet event generator [20].Phojet describes soft-particle production by relying on the dual-parton model [21], which comprises semi-hard processes modelled by parton scattering and soft processes modelled by pomeron exchange.Pythia 8 [22] is available in both classes.An early version of Pythia 8 is represented by version 8.145.In more recent versions, the default configuration has been changed to Tune 4C, which is based on LHC measurements in the central rapidity region.Both Pythia 8 versions utilize the CTEQ5L PDFs.The results of the latest available version, Pythia 8.180, are used to represent Tune 4C.Pythia 8.180, together with recent versions of Herwig++ [23], represent the class of recent event generators.In contrast to the Pythia generator, where hadronisation is described by the Lund string fragmentation, the Herwig++ generator relies on cluster fragmentation and the preconfinement properties of parton showers.Predictions of two versions of Herwig++ are chosen, each operated in the minimum-bias configuration, which uses the respective default underlying-event tune.For Herwig++ version 2.6.3, this corresponds to tune UE-EE-4-MRST, while version 2.7.0 [24] relies on tune UE-EE-5-MRST.Both tunes were also optimized to reproduce LHC measurements in the central rapidity region and rely on the MRST LO** [25] PDF set. 
Event definition and data selection An event is defined as visible if it contains at least one charged particle in the pseudorapidity range of 2.0 < η < 4.8 with p T > 0.2 GeV/c and p > 2 GeV/c.These criteria correspond to the typical kinematic requirements for particles traversing the magnetic field and reaching the downstream tracking stations.In order to compare the data directly to predictions from Monte Carlo generators without having a full detector simulation, the visibility definition is based on the actual presence of real charged particles, regardless of whether they are reconstructed as tracks or not. The tracks are corrected for detector and reconstruction effects to obtain the distribution of charged particles produced in pp collisions.Only tracks traversing the full tracking system are considered.The kinematic criteria are explicitly applied to all tracks to restrict the measurement to a kinematic range in which reconstruction efficiency is high.No specific quality requirement aimed at suppressing the contribution from misreconstructed tracks is applied.To ensure that tracks originate from the primary interaction it is required that the smallest distance of the extrapolated track to the beam line is less than 2 mm.The position of the beam line is determined independently for each data taking period from events with reconstructed primary vertices.Additionally, a track is required to originate from the luminous region; the distance z 0 of the track to the centre of this region has to fulfil z 0 < 3σ L , where the width σ L is of the order of 40 mm, determined from a Gaussian fit to the longitudinal position of primary vertices.This restriction also suppresses the contamination from beam-gas background interactions to a negligible amount.There is no explicit requirement for a reconstructed primary vertex in this analysis.Together with the chosen definition of a visible event, this allows the measurement to also be performed for events with only single particles in the acceptance. In the simulation, a primary charged particle is defined as a particle that either originates directly from the primary vertex or from a decay chain in which the sum of mean lifetimes does not exceed 10 ps.As a consequence, decay products of beauty and charm hadrons are treated as primary particles. Analysis The measured particle multiplicity distributions and mean particle densities are corrected in four steps: (1) reconstructed events are corrected on an event-by-event basis by weighting each track according to a purity factor to account for the contamination from reconstruction artefacts and non-prompt particles; (2) the event sample is further corrected for unobserved events that fulfil the visibility criteria but in which no tracks are reconstructed; (3) in order to obtain measurements for single pp collisions, a correction to remove pile-up events is applied; (4) the effects of various sources of inefficiencies, such as track reconstruction, are addressed.While correction factors for the multiplicity distributions and mean particle densities are the same, their implementation differs and is discussed in the following. 
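Before describing the individual corrections, the kinematic and origin requirements that define the analysed track sample (Sect. 4) can be summarised in a small helper. The variable names and the simple data container below are illustrative assumptions, not part of the LHCb software.

from dataclasses import dataclass

@dataclass
class Track:
    p: float        # momentum in GeV/c
    pt: float       # transverse momentum in GeV/c
    eta: float      # pseudorapidity
    ip_beam: float  # distance of closest approach to the beam line, in mm
    z0: float       # longitudinal distance to the centre of the luminous region, in mm

def passes_selection(t: Track, sigma_l: float = 40.0) -> bool:
    """Kinematic acceptance, beam-line and luminous-region requirements
    as described in the event definition and data selection."""
    return (
        t.p > 2.0
        and t.pt > 0.2
        and 2.0 < t.eta < 4.8
        and t.ip_beam < 2.0
        and abs(t.z0) < 3.0 * sigma_l
    )

print(passes_selection(Track(p=5.1, pt=0.45, eta=3.2, ip_beam=0.4, z0=12.0)))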
Correction for reconstruction artefacts and non-prompt particles
The selected track sample includes three significant categories of impurities: approximately 6.5 % are fake tracks, less than 1 % are duplicate tracks and about 4.5 % are tracks from non-prompt particles. The individual contributions are determined using fully simulated events. Henceforth, all impurity categories are collectively referred to as background tracks.
The probability of reconstructing a fake track, P_fake, depends on the occupancy of the tracking detectors and on the track parameters. The occupancy dependence is determined as a function of the track multiplicity measured by the VELO and as a function of the number of hits in the downstream tracking stations. This accounts for the increasing probability of reconstructing a fake track with the number of hits in each of the tracking devices involved. P_fake also depends on η and p_T; this is taken into account in an overall four-dimensional parametrisation.
Duplicate tracks are reconstruction artefacts; they have only a weak dependence on tracking-detector occupancy but exhibit a pronounced kinematic dependence. The probability of reconstructing a duplicate track, P_dup, is estimated as a function of η, p_T and VELO track multiplicity.
The probability that a non-prompt particle is selected, P_sec, is also estimated as a function of the same variables as for duplicate tracks. The predominant contribution is due to material interactions, such as photon conversion, and depends on the amount of material traversed in the detector. Low-p_T particles are more affected.
For each track, a combined impurity probability, P_bkg, is calculated as the sum of the three contamination types, P_bkg = P_fake + P_dup + P_sec; it depends on the kinematic properties of the track, the occupancy of the tracking detectors and the track multiplicity. When measuring the mean particle densities, it is sufficient to assign a per-track weighting factor of (1 − P_bkg) to correct for the impurities mentioned above. However, correcting particle multiplicity distributions in the same way would lead to non-physical fractional event multiplicities. Therefore, the impurity probability, P_bkg,i, of each track is summed over all tracks in an event to obtain a total event impurity correction, µ_ev. This corresponds to the mean number of expected background tracks in the event and permits the calculation of the probability of reconstructing a certain number of background tracks in each event, assuming Poisson statistics. The number of background tracks k in an event with n_ev observed tracks thus follows a Poisson probability distribution with mean µ_ev, with k restricted to at most the n_ev observed tracks. From this relation we derive the probability that an event contains a given number of real primary particles. Summing the normalized probability distributions of all events we obtain the multiplicity distribution corrected for background tracks.
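A minimal sketch of this per-event background treatment is given below. The function names and the simple renormalised-Poisson choice for k ≤ n_ev are assumptions for illustration; the paper's exact normalisation and the subsequent event-by-event redistribution may differ.

import math

def background_probabilities(mu_ev, n_ev):
    """P(k background tracks | n_ev observed tracks): Poisson with mean mu_ev,
    renormalised over the physically allowed range k = 0..n_ev."""
    weights = [math.exp(-mu_ev) * mu_ev**k / math.factorial(k) for k in range(n_ev + 1)]
    norm = sum(weights)
    return [w / norm for w in weights]

def corrected_multiplicity(events):
    """events: list of (n_ev, mu_ev) pairs; returns the summed probability
    distribution of the number of real primary tracks, n_ev - k."""
    corrected = {}
    for n_ev, mu_ev in events:
        for k, prob in enumerate(background_probabilities(mu_ev, n_ev)):
            corrected[n_ev - k] = corrected.get(n_ev - k, 0.0) + prob
    return corrected

print(corrected_multiplicity([(12, 1.3), (5, 0.6), (20, 2.1)]))

Each observed event thus contributes fractionally to several lower multiplicities, which avoids the non-physical fractional event counts mentioned above.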
Correction for undetected events
Defining a visible event based on the properties of the actual charged particles present in the event rather than on the reconstructed tracks introduces a fraction of spuriously undetected events. These are events that should be visible but contain no reconstructed tracks and thus remain undetected. Such unobserved events are most likely to occur when few charged particles are within the kinematic acceptance. The reconstruction of a track can fail due to multiple scattering, material interaction, or inefficiencies of the detector or of the reconstruction algorithms. In order to determine the number of undetected events that nevertheless fulfil the visibility definition, a data-driven approach is adopted. The true multiplicity distribution for visible events, T(n), where n is the number of charged particles, starts at n = 1. Since some of these events have no reconstructed tracks, they follow a multiplicity distribution U(n) starting from n = 0. As an event can only be detected if at least one track is reconstructed, U(0) cannot be determined directly. However, the number of undetected events can be estimated from the observed uncorrected distribution U(n) if the average survival probability, P_sur, for a single particle in the kinematic acceptance is known. Assuming that the survival probability, which is determined from simulation, is independent for two or more particles, the observed distribution can be approximated in terms of the still unknown actual multiplicity distribution T (Eq. 2). This equation is only valid under the assumption that reconstruction artefacts, such as fake tracks, which increase the number of observed tracks with respect to the number of true tracks, can be ignored. Following this approach, an event with a certain number of particles is only reconstructed with the same number of tracks or fewer, but not with more tracks. The uncertainties due to these assumptions are evaluated in simulation and are accounted for as systematic uncertainties. Equation 2 allows U(0) to be estimated from the true distribution T. All actual elements T(k) can also be expressed using the corresponding uncorrected measured bin U(k) and correction terms of T(n) at higher values of n > k (Eq. 3). Combining the formulas in Eq. 3 results in a recursive expression for U(0), which can be calculated numerically up to a given order r. The procedure is tested in simulation, where the estimated and actual fractions of undetected events agree within an uncertainty of 13 %. This is considered as a systematic uncertainty related to the assumptions made in the calculation. The fraction of undetected events obtained for data is 2.3 %, compared to 3.1 % in simulation. The fraction estimated in data is added to the measured multiplicity distributions and is also considered in the event normalisation of the mean particle density measurement.
Pile-up correction
The average number of interactions per bunch crossing in the selected data-taking period is small, resulting in a limited bias from pile-up. The measured particle multiplicity distributions are mainly composed of single pp collisions and a small fraction of events with a second pp collision. Therefore events with larger pile-up can be neglected. To obtain the particle multiplicity distribution of single pp collisions the iterative approach used in Ref.
[9] is applied.The total effect of the pile-up correction to the mean value of the multiplicity distribution is 3.3 %.The measurements of the mean particle density are normalised to the total number of pp collisions. Efficiency correction and unfolding procedure The final correction step accounts for limited efficiencies due to detector acceptance ( acc ) in the kinematic range of 2.0 < η < 4.8 and track reconstruction ( tr ).For particles fulfilling the kinematic requirements, the detector acceptance describes the fraction that reach the end of the downstream tracking stations and do not interact with material or are deflected out of the detector by the magnetic field.This fraction and the overall reconstruction efficiency are evaluated independently using simulated events.Correction factors are determined as functions of pseudorapidity and transverse momentum.No multiplicity dependence is observed.The mean particle densities are corrected by applying a combined correction factor of 1/( acc tr ) to each track in the same way as described in Sect.5.1. In order to correct the particle multiplicity distributions, an unfolding technique based on a detector response matrix is employed.The response matrix, R m,n , accounts for inefficiencies due to the detector acceptance and track reconstruction.It is constructed from the relation between the distribution of true prompt charged-particles T (n) and the distribution of measured tracks M (m), subtracted for background and pile-up, The matrix is obtained from simulated events.The simulated number of charged particles per event, n, is compared to the corresponding number of reconstructed and background subtracted tracks, m.Thus each possible value of simulated particle multiplicity is mapped to a distribution of reconstructed tracks.For very high multiplicities, the available number of events from the Monte Carlo sample is not sufficient to populate the entire matrix.The mapping is well described by a Gaussian distribution with mean value m and standard deviation σ m .The distribution of m and σ m for a true multiplicity bin n can be parametrized by combinations of polynomial and logarithmic functions.This allows an extrapolation of the matrix up to large values of n and simultaneously suppresses the effect of statistical fluctuations in the entries of the matrix.To extract the true particle multiplicity distribution T (n) from the measured distribution M (m), a procedure based on χ 2 -minimization [26,27] of the measured distribution M (m) and the folded distribution R m,n T (n) for different hypotheses of the true distribution, T (n), is adopted.The range of variation of T (n) is constrained by parametrising the multiplicity distributions.To avoid introducing model dependencies to the unfolded result, six different models with up to eight floating parameters are used.Five models are based on sums of exponential functions combined with polynomial functions of various order in the exponent and as a multiplier.In addition, a model based on a sum of negative binomial distributions is used.While particle multiplicities in η and p T bins can be well described by two negative binomial distributions, this is not sufficient for the multiplicity distribution in the full kinematic range, where this model has not been employed.All the parametrisations used are capable of describing the simulated multiplicity distributions with additional scope to accommodate the data unfolding.The floating parameters of the hypothesis T (n) are varied in order to minimise 
the χ 2 -function where E(m) represents the uncertainty of the measured distribution M (m).The parametrisation model yielding the best χ 2 -value is chosen as the central result, the other models are considered in the systematic uncertainty determination.Both the binned and total event unfolding procedures using simulated data are found to reproduce the generated distributions satisfactorily.The uncertainty of the unfolded distribution is determined through pseudo-experiments.Each pseudo-experiment is generated from the analytical model with the parameters randomly perturbed according to the best fit and the correlation matrix. The unfolded distribution for the total event is truncated at a value of 50 particles and the binned distributions at a value of 20 particles.This corresponds to the limit where, even with the extended detector-response matrix, larger particle multiplicities cannot be mapped to the range of the measured track-multiplicity distribution. Systematic uncertainties The precision of the measurements of charged particle multiplicities and mean particle densities are limited by systematic effects.The bin contents of the particle multiplicity distribution for the full event typically have a relative statistical uncertainty in the range of 10 −4 to 10 −2 for low and high multiplicities, respectively.The systematic uncertainties are typically around 1 − 10 %, the largest contribution arising from the uncertainty of the amount of detector material.All individual contributions are discussed below. The characteristics and origin of fake tracks is studied in detail in fully simulated events and the correction factors of Sect.5.1 are calculated.The agreement between data and simulation is verified by estimating the fake-track fraction in both samples by probing the matching probability of track segments in the long-track reconstruction algorithm.The results are in good agreement and the differences amount to an overall 2 % systematic uncertainty on the applied correction factors. The systematic uncertainty introduced by differences in the fraction of duplicate tracks in data and simulation is determined by studying the number of track pairs with small opening angles.The observed excess of duplicate tracks in data results in a relative systematic uncertainty on the duplicate-track fraction of 9 %.As the total amount of this type of reconstruction artefacts is small, this results in an overall 0.1 % systematic uncertainty on the final result. Uncertainties introduced by the correction for non-prompt particles depend predominantly on the knowledge of the amount of material within the detector.The agreement with the amount of material modelled in the simulation, on average, is found to be within 10 %.In order to estimate the effects of non-prompt particles still passing the track selection, their composition is studied.Around 40 % of the wrongly selected particles arise from photon conversion and is related to the uncertainty of the amount of material.Another third of the particles are decay products of K 0 S mesons, whose production cross-section has previously been measured by LHCb [3] to be in good agreement with simulation.Around 20 % of the particles originate from decays of Λ baryons and hyperons.These are measured to disagree by approximately 40 % with the production cross-sections used in the simulation.Combining these contributions results in a 12 % systematic uncertainty on the fraction of non-prompt particles. 
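To make the unfolding strategy of Sect. 5.4 concrete, the following is a minimal, illustrative sketch (not the analysis code used by LHCb) of a χ²-based unfold with a response matrix and a parametrised hypothesis for T(n). The array names, the model form and the fit settings are assumptions for illustration; the measured distribution M(m), its uncertainties E(m) and the response matrix R[m, n] are taken as given NumPy arrays.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_unfold(M, E, R, model, p0, n_max):
    """Fit a parametric hypothesis T(n) = model(n, params) so that the folded
    spectrum R @ T matches the measured track distribution M within errors E."""
    n = np.arange(n_max + 1)          # true-multiplicity axis
    valid = E > 0                     # ignore empty measured bins

    def chi2(params):
        T = model(n, params)          # hypothesis for the true distribution
        folded = R @ T                # R[m, n] maps true n to measured m
        resid = (M[valid] - folded[valid]) / E[valid]
        return np.sum(resid ** 2)

    result = minimize(chi2, p0, method="Nelder-Mead",
                      options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
    return model(n, result.x), result

# One of the parametrisation classes described in the text: an exponential of a
# polynomial multiplied by a polynomial (all coefficients are free parameters).
def exp_poly_model(n, p):
    a0, a1, a2, b0, b1 = p
    return np.clip(b0 + b1 * n, 0, None) * np.exp(a0 + a1 * n + a2 * n ** 2)
```

In practice several such model classes would be fitted and the one with the best χ² retained, with the spread among models entering the systematic uncertainty as described above.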
To account for differences between the actual track reconstruction efficiency and that estimated from simulation, a global systematic uncertainty of 4 % in average is assigned [28,29]. The uncertainty on the detector acceptance can be split in two components: the uncertainty on the knowledge of the detector material and the uncertainty related to the requirement for particles to have trajectories within the acceptance of the downstream tracking stations.The momentum distributions of charged particles in data and in simulation are in good agreement, therefore the second effect is negligible.The remaining uncertainty related to material interaction leads to a relative systematic uncertainty on the correction factors of 3 % and is assigned as an individual factor for each track. A modified response matrix is used to estimate the impact on the multiplicity distributions of systematic uncertainties due to the track reconstruction and detector acceptance.The systematic uncertainties of both efficiencies are combined quadratically and result in a 5 % uncertainty on the response matrix.A response matrix with an efficiency decreased by this value is generated.The whole unfolding procedure (Sect.5.4) is repeated with this matrix and the full difference to the nominal result is assigned as uncertainty. Model dependencies due to the parametrisations used to unfold the true particle multiplicity distributions are determined by sampling six different parametrisation models for each of the multiplicity distributions.The model corresponding to the minimum χ 2 value of the unfolding fit is taken as the central result, while the maximum difference in each bin between all models and the central result is taken as the systematic uncertainty.This difference is small compared to the uncertainty due to the modified response matrix. Uncertainties related to the correction for undetected events (Sect.5.2) are dominated by the 13 % systematic uncertainty arising from the assumptions made in the calculation model.In addition, the average survival probability used in this model is affected by uncertainties of the amount of detector material, detector acceptance and track reconstruction efficiency.This sums to a maximum uncertainty of 15 % on the number of undetected events.Only bins from one to three tracks are affected, where the variation is dominated by this uncertainty.For the particle densities, the impact is negligible with respect to other uncertainties.For the particle multiplicity distributions it results in a small change of 0.4 % of the truncated mean. Uncertainties related to the pile-up fraction are evaluated to be negligible compared to all other contributions as the total size of the corrections is already small. The effect of non-zero beam crossing angles is determined to be insignificant, as well as the background induced by beam gas interactions. Charged particle densities The fully corrected measurement of mean particle densities in the kinematic region of p > 2 GeV/c, p T > 0.2 GeV/c and 2.0 < η < 4.8 is presented as a function of pseudorapidity in Fig. 1 and as a function of transverse momentum in Fig. 2; the corresponding numbers are presented in the Appendix.The data points show a characteristic drop towards larger pseudorapidities but also a falling edge for η < 3, which is caused by the minimum momentum requirement in this analysis.This is qualitatively described by all considered Monte Carlo event generators and their tunes. 
The first group of generators compared to our measurements comprises different tunes of Pythia 6 and Phojet and is shown in Figs. 1a and 2a. The default configuration of Pythia 6.426 underestimates the number of charged particles by roughly 20 % at large η and up to 50 % at small η. The descending slopes towards small and large pseudorapidities are also insufficiently modelled. The Perugia NOCR tune shows a slight improvement in shape and in the number of charged particles; Perugia 0 predicts an even smaller mean particle density over the whole kinematic range. Predictions of the Phojet generator are similar to the tunes of Pythia 6. Within this group, the LHCb tune of Pythia 6 provides the best agreement with the data but still underestimates the charged-particle production rate by 10–40 %. The same behaviour is observed in the p_T dependence, where all configurations underestimate the number of charged particles. These generator predictions were optimised without input from LHC measurements.

Predictions from the more recent generators Pythia 8 and Herwig++ are shown in Figs. 1b and 2b. Pythia 8.145 with default parameters was released without tuning to LHC measurements and performs no better than the LHCb tune of Pythia 6. In contrast, Pythia 8.180, which was optimised on LHC data, describes the measurements significantly better than the previous version. The predictions of Herwig++ are also in reasonably good agreement with the data, although the charged-particle production rate is underestimated at small pseudorapidities. The Herwig++ generator version 2.7.0, which uses tune UE-EE-5-MRST, overestimates the number of prompt charged particles in the low p_T range but underestimates it at larger transverse momenta. The predictions of Herwig++ version 2.6.3, which relies on tune UE-EE-4-MRST, show a more complete description of the data. Both event generators, Pythia 8 and Herwig++, describe the data over a wide range.

6 Summary

The charged particle multiplicities and the mean particle densities are measured in inclusive pp interactions at a centre-of-mass energy of √s = 7 TeV with the LHCb detector. The measurement is performed in the kinematic range p > 2 GeV/c, p_T > 0.2 GeV/c and 2.0 < η < 4.8, in which at least one charged particle per event is required. By using the full spectrometer information, it is possible to extend the previous LHCb results [9] to include momentum-dependent measurements. The comparison of data with predictions from several Monte Carlo event generators shows that recent generators, tuned to LHC measurements in the central rapidity region, are in better agreement than predictions from older generators. While the phenomenology in some kinematic regions is well described by recent Pythia and Herwig++ simulations, the data in the higher p_T and small η ranges of the probed kinematic region are still underestimated. None of the event generators considered is able to describe the entire range of measurements.

Figures 1 and 2: Charged particle density as a function of η (Fig. 1) and of p_T (Fig. 2). The LHCb data are shown as points with statistical error bars (smaller than the marker size) and combined systematic and statistical uncertainties as the grey band. The measurement is compared to several Monte Carlo generator predictions, (a) Pythia 6 and Phojet, (b) Pythia 8 and Herwig++. Both plots show predictions of the LHCb tune of Pythia 6, which is used in the analysis.

Figure 3: Observed charged particle multiplicity distribution in the full kinematic range of the analysis. The error bars represent the statistical uncertainty, the error band shows the combined statistical and systematic uncertainties. The data are compared to several Monte Carlo predictions, (a) Pythia 6 and Phojet, (b) Pythia 8 and Herwig++. Both plots show predictions of the LHCb tune of Pythia 6, which is used in the analysis.

Figure 7: Observed charged particle multiplicity distribution in different p_T bins. Error bars represent the statistical uncertainty, the error bands show the combined statistical and systematic uncertainties. The data are compared to Monte Carlo predictions, (a–c) Pythia 6 and Phojet, (d–f) Pythia 8 and Herwig++. All plots show predictions of the LHCb tune of Pythia 6, which is used in the analysis.

Table 3: Truncated mean value and root-mean-square deviation for charged particle multiplicities in different η bins. The range is from 0 to 20 particles. The first quoted uncertainty is statistical and the second systematic.
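For clarity, the truncated mean and root-mean-square deviation quoted in Table 3 can be obtained from a normalised multiplicity distribution as in the short sketch below; the renormalisation within the truncated range 0–20 is an assumption about the convention used.

```python
import numpy as np

def truncated_mean_rms(P, n_max=20):
    """Truncated mean and RMS deviation of a multiplicity distribution P(n),
    restricted to the range 0..n_max as quoted in Table 3."""
    n = np.arange(n_max + 1)
    p = np.asarray(P[: n_max + 1], dtype=float)
    p = p / p.sum()                      # renormalise within the truncated range
    mean = np.sum(n * p)
    rms = np.sqrt(np.sum((n - mean) ** 2 * p))
    return mean, rms
```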
7,394.8
2014-02-18T00:00:00.000
[ "Physics" ]
Sequential algorithm for life threatening cardiac pathologies detection based on mean signal strength and EMD functions Background Ventricular tachycardia (VT) and ventricular fibrillation (VF) are the most serious cardiac arrhythmias that require quick and accurate detection to save lives. Automated external defibrillators (AEDs) have been developed to recognize these severe cardiac arrhythmias using complex algorithms inside it and determine if an electric shock should in fact be delivered to reset the cardiac rhythm and restore spontaneous circulation. Improving AED safety and efficacy by devising new algorithms which can more accurately distinguish shockable from non-shockable rhythms is a requirement of the present-day because of their uses in public places. Method In this paper, we propose a sequential detection algorithm to separate these severe cardiac pathologies from other arrhythmias based on the mean absolute value of the signal, certain low-order intrinsic mode functions (IMFs) of the Empirical Mode Decomposition (EMD) analysis of the signal and a heart rate determination technique. First, we propose a direct waveform quantification based approach to separate VT plus VF from other arrhythmias. The quantification of the electrocardiographic waveforms is made by calculating the mean absolute value of the signal, called the mean signal strength. Then we use the IMFs, which have higher degree of similarity with the VF in comparison to VT, to separate VF from VTVF signals. At the last stage, a simple rate determination technique is used to calculate the heart rate of VT signals and the amplitude of the VF signals is measured to separate the coarse VF from VF. After these three stages of sequential detection procedure, we recognize the two components of shockable rhythms separately. Results The efficacy of the proposed algorithm has been verified and compared with other existing algorithms, e.g., HILB [1], PSR [2], SPEC [3], TCI [4], Count [5], using the MIT-BIH Arrhythmia Database, Creighton University Ventricular Tachyarrhythmia Database and MIT-BIH Malignant Ventricular Arrhythmia Database. Four quality parameters (e.g., sensitivity, specificity, positive predictivity, and accuracy) were calculated to ascertain the quality of the proposed and other comparing algorithms. Comparative results have been presented on the identification of VTVF, VF and shockable rhythms (VF + VT above 180 bpm). Conclusions The results show significantly improved performance of the proposed EMD-based novel method as compared to other reported techniques in detecting the life threatening cardiac arrhythmias from a set of large databases. Background Ventricular Fibrillation (VF) and Ventricular Tachycardia (VT) are life-threatening cardiac arrhythmias generally observed in adults with coronary artery disease. In 1979, automatic external defibrillators (AEDs) were introduced to accurately analyze the cardiac rhythms and, if appropriate, advise/deliver a high-energy shock to those patients who suffer from coarse VF and VT of a rate above 180 bpm, combinedly known as the shockable rhythms [6]. Though a significant number of works have been published on this topic, the scope for development of more accurate and reliable techniques relaxing assumptions of certain previous works and incorporating features from diverse nature of the cardiographic signals is yet open. 
Based on separation capability, the algorithms available in the literature can be classified into categories such as, separating VF from VT [4,7,8], VF from normal sinus rhythm (NSR) [9], VF plus VT from nonVTVF [10], shockable rhythms from other ECG pathologies [5,11,12], VF from nonVF [1][2][3][4][13][14][15][16][17][18][19][20][21][22][23][24]. Comprehensively, the last two categories [25] are the most realistic for fruitful hospital management of cardiac abnormalities. To separate VF from VT many efforts have been aimed at characterizing these abnormalities by means of diverse techniques such as the sequential hypothesis algorithm proposed by Thakor et al. [4], continuous wavelet transform [7], paired unipolar electrograms [8] etc. But only separating VF from VT is not useful for cardiac management. Because, in real life problems, other types of abnormalities are also present. A recent work is presented in [9] using the EMD technique to separate VF from NSR which shows almost 100% accuracy. But, when other types of pathology except the NSR and VF are present, poor accuracy is obtained. To separate VT plus VF from other arrhythmias, a time domain based complexity measure algorithm has been proposed in [10]. But it fails to show good performance due to its weakness in selecting a proper threshold value. Another approach has been reported in [5] to classify arrhythmias into two types: shockable and non-shockable signals. This work shows quite good accuracy but improvement area is still open. Various algorithms have been developed for classifying the abnormalities according to the last category. To separate VF from other arrhythmias, different methods were proposed based on different techniques of signal processing, such as the threshold crossing interval (TCI) algorithm [4], auto-correlation function (ACF) [13], probability density function method [14], VF-filter method [15], [16], [17], rate and irregularity analysis [18], [19], sequential hypothesis testing algorithm [20], [21], correlation waveform analysis [22], spectral analysis [3] and four fast template matching algorithms [23]. But these algorithms fail to show good performance when tested on a large database due to the some shortcomings in their reported algorithms. For example, the TCI method, based on a time domain technique, fails to detect the normal sinus rhythm (NSR) signal due to several factors, e.g., choice of 1-s analysis window, improper threshold etc. [24]. An improved version of this algorithm called the threshold crossing sample count (TCSC) method has been reported in [24] by removing some of the drawbacks of the TCI method. But the TCSC algorithm does not consider the shape of the ECG signal, therefore, it fails to classify VT into the nonVF group. On the other hand, the ACF relies on the regularity in NSR and irregularity in VF rhythms [26]. But practically, in most cases, there is no strict regularity found in the NSR signal and, therefore, the detection accuracy of the NSR signal by this method severely falls. The spectral analysis method successfully detects the nonVF signal from ECG arrhythmias. But in the detection of VF, this method shows poor accuracy due to the false detection of the VF signal with low peak frequency in the spectrum [26]. On the other hand, the Hilbert transform (HILB) [1] and phase space reconstruction (PSR) [2] algorithms employing phase space plot of the ECG signal demonstrate improved performance of VF detection. 
Because the phase space plot is based on the histogram of a signal, it does not consider the shape of this signal. Thus, to separate VT from VF when other arrhythmias are also present, these two methods are not very suitable. In this paper, we propose a sequential detection algorithm based on the mean absolute strength and certain low-order intrinsic mode functions (IMFs) of the EMD analysis of the signal along with a simple rate determination technique. In our proposed algorithm, we not only separate VF but also VT from other arrhythmias. VT plus VF (VTVF) is separated from other arrhythmias in the first stage using an index called the mean absolute value (MAV). Then we decompose the VTVF signal into IMFs using the EMD technique to discriminate VF from VT. EMD was introduced in [27] for processing signals from nonlinear and non-stationary processes. Here, we apply the EMD technique to biomedical signals and particularly for ECG analysis. Next, a simple rate determination algorithm is utilized to classify VT according to the heart rate and to separate coarse VF from fine VF, amplitude of the VF signals are measured. Finally, this sequential ECG arrhythmias classification approach is interpreted as three different detection schemes, such as, VTVF from nonVTVF; VF from nonVF; shockable from non-shockable rhythms. While proposing an algorithm for detecting the shockable rhythms special care must be taken to make the specificity high. It will then ensure the false alarm generation probability of the AEDs low. But an algorithm with high specificity generally results in low sensitivity. To mitigate this contradictory requirement, detection of the shockable rhythms using a sequential algorithm is found to be more effective. At last, in the 'Results' Section, we compare our algorithm with different well-known algorithms available in the literature. ECG signals We use the MIT-BIH Arrhythmia Database (MITDB) [28], Creighton University Ventricular Tachyarrhythmia Database (CUDB) [29] and MIT-BIH Malignant Ventricular Arrhythmia Database (VFDB) [30] to evaluate our algorithm. The MITDB contains 48 files, 2 channels per file, each channel 1805 seconds long. The CUDB contains 35 files, 1 channel per file, each channel 508 seconds long. The VFDB contains 22 files, 2 channel per file, each channel 2100 seconds long. In our analysis, we choose episodes of 8-s long from the whole MIT-BIH arrhythmia and CU databases. We perform a continuous analysis by taking the data in steps of 1 sec. Thus, the total number of 8-s episodes collected from the MITDB and CUDB are (1805-7) × 48 × 2 = 172608 and (508-7) × 35 = 17535, respectively. Since, the VFDB includes ECG recordings of subjects who have experienced episodes of sustained VT and VF, we use this database for VF and VT episodes. By taking the ECG signal in steps of 1 sec we choose 4000 episodes of VF and 4000 episodes of VT from this database. Therefore, a total of 172608 + 17535 + 4000 + 4000 = 198143 episodes are used to compare our algorithm with other algorithms. Amongst these 198143 episodes, we have noticed some episodes which are annoted as the noise signals. Since, in this work we have no interest in these noise signals, we have omitted these noise episodes. Also, analysis of the distinct mode asystole signal is not presented here. Therefore, this type of ECG signal is not included into our complete dataset. The complete dataset includes the following types of ECG signals. 
To determine the discriminating threshold and verify its effectiveness, the complete dataset is divided into two subsets: a training dataset and a test dataset. The training dataset is used to determine the threshold value, and the test dataset is used to check the efficacy of the threshold determined from the training dataset. Both datasets include all types of the above mentioned rhythms. The training dataset includes: • 'Asyst': asystole; an ECG signal with a peak-to-peak amplitude of < 100 μV, lasting more than 4 s. • 'fine VF': any VF signal with an amplitude in the range 100–200 μV. It is clear from this classification that VT is divided into two categories according to heart rate, 'VT-hi' and 'VT-lo'. This VT classification uses a border heart rate of 180 bpm. It is, however, not strict; it may be in the range 150–180 bpm. AEDs only advise/deliver a shock for shockable rhythms, while intermediate rhythms are treated in a different way called anti-tachycardia pacing.

Detection of VTVF from other arrhythmias

To detect the life threatening cardiac arrhythmias, VT and VF, from other arrhythmias, we propose to use a property that does not match that of any nonVTVF signal. Typical ECG waveforms of NSR, VT and VF are given in Figure 1. Here, NSR is treated as the representative of the nonVTVF signals. The three waveforms are plotted on the same scale. From this figure we see that the width of the QRS complex differs between arrhythmias. For NSR, the QRS interval is normally 0.06–0.10 s, while in VT the QRS complex is wider (> 0.10 s). In VF, no QRS complex is noticed. On the other hand, P waves are normal (upright and uniform) in the NSR waveform, whereas no P waves are observed in the VT and VF signals [31]. The distinguishable morphological characteristics of these three groups, namely nonVTVF, VT and VF, can be quantified using a term called the absolute strength of a signal. The absolute strength or mean absolute value (MAV) of a signal x(n) of length N is defined as

MAV = (1/N) * sum_{n=1}^{N} |x(n)|,   (1)

where N is the number of samples within the chosen analysis length and n indexes the samples. In the case of NSR, the main representative of the nonVTVF group, the duration of the QRS complex is small compared to one ECG period, as illustrated in Figure 1(a). It is also observed from this figure that the NSR signal level is low for most of the time in an ECG cycle. Therefore, the absolute signal level of the QRS complexes dominates the summation in the MAV calculation (eqn. (1)), and a low MAV is obtained for such episodes. In the case of VT, the QRS complex is much wider than that of NSR, and the ECG signal hardly passes through the baseline, as is also the case for VF. Therefore, the MAV of VT and VF for a fixed-duration window is larger than that of NSR. Before calculating the total MAV of an ECG signal, it is first necessary to normalize the ECG signal, because the ECG signals collected from the different databases have different dynamic ranges. Another important point is that, to use the MAV as the threshold parameter, the analysis window duration must be chosen properly. To understand why, consider a normalized VF episode of 8-s length from the cu01m file of the CU database, shown in Figure 2. This episode includes a damped VF signal, in which most of the signal samples fall in the low amplitude range, so the MAV computed over the whole episode becomes low (e.g., 0.2577), as the sketch below illustrates.
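As a concrete illustration of eqn. (1) and of the effect of the analysis window length, the following sketch computes the MAV of a full episode and the window-averaged MAV over sliding 2-s segments with a 1-s step. It is a simplified outline rather than the authors' implementation; the function names, the per-window normalisation and the use of NumPy are assumptions.

```python
import numpy as np

def mav(x):
    """Mean absolute value of a segment, eqn. (1)."""
    return float(np.mean(np.abs(x)))

def mav_a(episode, fs, win_s=2, step_s=1):
    """Window-averaged MAV: sliding 2-s windows stepped by 1-s, each window
    normalised by its own maximum absolute value before eqn. (1) is applied."""
    win, step = int(win_s * fs), int(step_s * fs)
    values = []
    for start in range(0, len(episode) - win + 1, step):
        seg = np.asarray(episode[start:start + win], dtype=float)
        peak = np.max(np.abs(seg))
        if peak > 0:
            values.append(mav(seg / peak))
    return float(np.mean(values)) if values else 0.0

# For a damped VF episode, the single full-episode MAV can be low (~0.26 in the
# example above) while the window-averaged value stays higher (~0.34), which is
# why the short window is preferred; the first-stage rule flags VTVF when the
# window-averaged MAV exceeds the threshold MAV_d chosen from the training data.
```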
Therefore, it is necessary to make the analysis window length small. Choosing an analysis window of too small duration (say, 1-s) creates the same problem as observed in the TCI method. Here, we choose the 2-s window for analysis. After calculating the MAV of this 2-s analysis window, we shift the analysis window by 1-s successively for other segments of 2-s within the 8-s ECG episode and calculate the MAV again. After completion of shifting the analysis window to cover the whole decision frame, we average all the MAV s found in each stage and finally MAV = 0.34 is found which is higher than that obtained for the 8-s analysis window. In this way, by appropriately selecting the analysis window length in calculating the MAV , we can overcome the effect of damped behavior of the ECG signal. Observation of other nonVTVF ECG waveforms such as Premature Ventricular Contraction (PVC), Premature Atrial Contraction (PAC), Supraventricular Tachycardia (SVT) etc. reveals that these abnormalities also have low MAV compared to VT and VF. For example, PVC arrhythmia has small MAV because a PVC beat contains only wide QRS complex and no P waves or T waves are associated with this abnormal beat [31]. Thus, we can use MAV as the performance index to discriminate the VTVF from other arrhythmias. In ECG analysis, it is important that we choose the episode length or decision frame appropriately. Decision frame should be taken in such a way that is neither too short to make a false alarm nor too long to cause severe cardiac arrest. Decreasing the episode length from its optimum value results in a low accuracy but quick detection. On the contrary, increasing the episode length improves the accuracy up to a certain level but requires longer detection time. The whole process of separating VT plus VF from other arrhythmias can be described as in the following: 1. Choose a segment of ECG signal of L e -second duration. This segmented ECG signal of L e -second duration should be stored for the second stage. 2. The segment of the ECG signal is preprocessed using the well-known filtering process as used in [32], which is carried out in a MATLAB routine, called filtering. m [33]. The filtering algorithm works in four successive steps. • First, the mean value is subtracted from the signal. • Second, a moving average filter is applied in order to remove the power line noise. • Third, a drift suppression is carried out by a high pass filter with a cut-off frequency of 1 Hz. • In the last step, a low pass Butterworth filter with a cut-off frequency of 30 Hz is applied in order to suppress the high frequency noise like interspersions and muscle noise. All filters in the preprocessing step is implemented using the Matlab routine 'filtfilt' function. 3. Then, choose a smaller segment x(n) from the ECG signal of L e -second duration in such a way that the length of the segment is 2-s. If the sampling frequency of the ECG signal is F s samples/s, then the total sample within this segment (N) is 2F s . For example, the sampling frequency of the ECG signal of the MITDB is 360 smaples/sec. Thus the length of the smaller segment N is 2 × 360 = 720 samples. 4. Next, divide the smaller segment x(n) by the maximum absolute value found in that segment. 5. Calculate the MAV using (1). 6. Shift the window by 1-s successively for other segments of 2-s within the L e -second ECG episode and go through step (4) to (5). 7. 
Make a decision on every L_e-second ECG episode (L_e ≥ 2) by averaging the L_e − 1 consecutive values of MAV obtained from the L_e − 1 consecutive 2-s segments taken with a 1-s step. The average value MAV_a for an L_e-second episode is calculated as

MAV_a = (1/(L_e − 1)) * sum_{i=1}^{L_e − 1} MAV_i,   (2)

where MAV_i is the value of MAV in the i-th 2-s stage.

We calculate the MAV_a of the three pathologies shown in Figure 1 and obtain 0.0765 (NSR), 0.3954 (VT) and 0.4116 (VF). To verify the effectiveness of the MAV index for separating the nonVTVF arrhythmias from the VTVF arrhythmias, other nonVTVF representatives, namely left bundle branch block beat, nodal (junctional) premature beat (rate ≈ 100 bpm), high rate supraventricular tachycardia (rate ≈ 100 bpm), premature ventricular contraction, right bundle branch block beat and paced beat, are chosen from the ECG databases. These six pathologies are demonstrated in the corresponding figure panels, and there is a clear separation between their MAV_a values and those obtained for VT and VF episodes. If MAV_a is greater than a certain threshold MAV_d, VTVF is detected. To determine the threshold value, the training dataset is used. Figure 4 shows the probability distribution of MAV_a for the training dataset and the test dataset. The threshold value is selected from the probability distributions of the training dataset shown in Figure 4(a), and we have chosen MAV_d = 0.27 for L_e = 8-s to ensure high specificity as well as good sensitivity. It is also noticed from Figure 4(b) that when this threshold is applied to the test dataset, high accuracy is still obtained.

Separation of VF from VTVF

Now that we have separated VTVF from other arrhythmias, in this stage we separate VF from VT. Before we explain our motivation for using the EMD technique, we briefly describe what EMD is.

EMD Preliminaries

EMD is a signal decomposition method which is fully data-driven and does not require any a priori basis function [27,34]. The aim of the EMD is to decompose the signal into a sum of intrinsic mode functions (IMFs). An IMF is a function that satisfies two conditions: (1) in the whole data set, the number of extrema and the number of zero crossings must either be equal or differ at most by one; and (2) at any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero. An IMF represents an oscillatory mode embedded in the data, as a counterpart to the simple harmonic function used in Fourier analysis [35]. Given a signal x(n), the starting point of the EMD is the identification of all the local maxima and minima. All the local maxima are connected by a cubic spline [36] curve to form the upper envelope e_u(n). Similarly, all the local minima are connected by a spline curve to form the lower envelope e_l(n). The mean of the two envelopes is denoted as m_1(n) = [e_u(n) + e_l(n)]/2 and is subtracted from the signal. Thus the first component h_1(n) is obtained as

h_1(n) = x(n) − m_1(n).   (3)

The above procedure to extract the IMF is called the sifting process. Ideally, h_1(n) should be an IMF, as its construction seems to satisfy all the requirements of an IMF. Since h_1(n) still contains multiple extrema in between zero crossings, the sifting process is performed again on h_1(n). This process is applied repeatedly to the proto-IMF h_k(n) until the first IMF c_1(n), which satisfies the IMF conditions, is obtained. A couple of stopping criteria are used to terminate the sifting process [27].
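The sifting procedure described above can be outlined in code as follows. This is a simplified, illustrative sketch rather than a production EMD implementation: boundary handling of the spline envelopes is minimal, the stopping rule uses the SD criterion given in eqn. (4) below with an illustrative threshold, and all names are placeholders (established libraries such as PyEMD provide complete implementations).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def envelope_mean(x):
    """Mean of the upper and lower cubic-spline envelopes of x (one sifting step)."""
    n = np.arange(len(x))
    # interior local maxima / minima
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None                      # too few extrema to build envelopes
    # include the end points so the splines cover the full record
    up = CubicSpline(np.r_[0, maxima, len(x) - 1], np.r_[x[0], x[maxima], x[-1]])(n)
    lo = CubicSpline(np.r_[0, minima, len(x) - 1], np.r_[x[0], x[minima], x[-1]])(n)
    return (up + lo) / 2.0

def sift_first_imf(x, sd_threshold=0.3, max_iter=100):
    """Extract the first IMF c1(n) from a 1-D NumPy array x by repeated sifting."""
    h_prev = x.astype(float)
    for _ in range(max_iter):
        m = envelope_mean(h_prev)
        if m is None:
            break
        h = h_prev - m                   # h_k(n) = h_{k-1}(n) - m_k(n)
        sd = np.sum((h_prev - h) ** 2 / (h_prev ** 2 + 1e-12))
        h_prev = h
        if sd < sd_threshold:
            break
    return h_prev                        # c1(n); the first residue is x - c1
```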
A commonly used criterion is the value of the standard deviation, SD, computed from two consecutive sifting results:

SD = sum_{n=1}^{N} [ (h_{k−1}(n) − h_k(n))^2 / h_{k−1}^2(n) ],   (4)

where N is the total number of samples in x(n). When the SD is smaller than a threshold, the first IMF c_1(n) is obtained. Then c_1(n) is separated from the rest of the data by

r_1(n) = x(n) − c_1(n).   (5)

It is to be noted that the residue r_1(n) still contains some useful information. We can therefore treat the residue as a new signal and apply the same sifting process to obtain

r_{i−1}(n) − c_i(n) = r_i(n),  i = 2, 3, …, q.   (6)

The whole procedure terminates when either the component c_q(n) or the residue r_q(n) becomes very small, or when the residue r_q(n) becomes a monotonic function. Combining (5) and (6) gives

x(n) = sum_{i=1}^{q} c_i(n) + r_q(n).   (7)

The result of the decomposition is q intrinsic modes and a residue. The lower order IMFs capture the fast oscillation modes while the higher order IMFs typically represent the slow oscillation modes present in the underlying signal [27,37]. An example illustrating the Empirical Mode Decomposition is given in the 'Appendix' section.

As mentioned earlier, the VT waveform contains the QRS complex but it is absent in the VF waveform. The asymmetry of the QRS complex with respect to the baseline gives rise to asymmetric signal envelopes, which are comprised of local maxima and minima. Another point to note is that, in the case of VT, the comparatively short duration of the QRS complex results in a wideband ECG signal. On the other hand, the QRS complex is absent in VF; as a result this pathology has more symmetric envelopes than other abnormalities and possesses narrowband characteristics. Therefore, to separate VF from VT, the EMD technique can effectively exploit the narrowband/wideband characteristics and the symmetry/asymmetry of a signal's envelopes.

Now, we apply the EMD technique to a VF episode to decompose it into IMFs and plot the original ECG signal x(n) along with its first IMF, as shown in Figure 5(a). From Figure 5(a) we can say that, in the case of VF, the first IMF is very close to the original ECG signal. This is because VF has properties that closely match the properties of an IMF as stated above. As the EMD technique cannot decompose an IMF further, in the case of a VF episode there is a unique relationship between the ECG signal and its first IMF; here, unique relationship means that the original ECG signal and its first IMF are very similar. In some cases high frequency noise still remains in the ECG signal after preprocessing. Therefore, when we apply EMD to decompose the VF signal, the first IMF captures this high frequency noise as the fast oscillation mode, as illustrated in Figure 5(b). To overcome this effect we consider the sum of the first two IMFs instead of using only the first one. We can observe from Figure 5(b) that the unique relationship still exists between the ECG signal and the sum of the first two IMFs for the VF episode. In the case of VT, this unique relationship or similarity between the ECG signal and the sum of its first two IMFs does not hold, as illustrated in Figure 6 for both noise free and noise corrupted VT signals. To exploit the property that this unique relationship exists only for VF, the sum of the first two IMFs is subtracted from the ECG signal and the MAV of the difference signal is calculated. Since the dynamic range of the ECG signal varies from database to database, we normalize this MAV with respect to the original ECG signal.
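Given the IMFs from any EMD implementation (for example the sifting sketch above applied to successive residues, or a library such as PyEMD), the normalised MAV used in this stage can be written compactly as below; the variable names and the interface are illustrative assumptions.

```python
import numpy as np

def nmav(x, imfs):
    """Normalised MAV of the difference between the signal and the sum of its
    first two IMFs: small for VF (signal ~ IMF1 + IMF2), larger for VT."""
    diff = x - (imfs[0] + imfs[1])
    return float(np.mean(np.abs(diff)) / np.mean(np.abs(x)))

# Second-stage decision as described in the text: classify the episode as VF
# when the window-averaged NMAV falls below the threshold NMAV_d chosen from
# the training data, and as VT otherwise.
```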
In case of a VF episode, the normalized MAV or NMAV of the difference signal is very small than that of a VT episode. Here, we choose a 2-s analysis window as in the previous case. But in this case, the performance index (NMAV ) is less sensitive to the analysis window length. The process of detecting VF from VTVF can then be described as below: 1. First, choose a segment x(n) of duration 2-s and N samples from the previously saved ECG signal of L e -second duration. 2. At this stage, the ECG signal is preprocessed in three successive steps. • First, the mean value is subtracted from the signal. • Second, a drift suppression is carried out by a high-pass filter with a cut-off frequency of 1 Hz. • In the last step, a low-pass Butterworth filter with a cut-off frequency of 20 Hz and order 12 is applied to suppress the high frequency information. 3. Apply EMD on x(n) and determine imf n imf n imf n where NMAV i is the value of NMAV in the i-th 2-s stage. Applying the above stated process, the NMAV a are obtained as 0.08 (for Figure 5(b)), 0.97 (for Figure 6(a)) and 0.93 (for Figure 6(b)). If NMAV a is less than a certain threshold NMAV d , VF is detected, otherwise VT is detected. The threshold value NMAV d is selected by a process as described before using the training dataset. As in this stage we separate VF from VT, therefore, the training and test datasets include only VF and VT episodes. This threshold value is then applied to the test dataset. Figure 7 shows the probability distribution of NMAV a of the training dataset and the test dataset. From the training dataset, we have chosen NMAV d = 0.65 for L e = 8-s to ensure that both VF and VT detection accuracies are good. It is also noticed from Figure 7(b) that the threshold value calculated from the training dataset can be applied to the test dataset maintaining almost the same accuracy as found from the training dataset. Classification of VT and VF according to the AHA recommendations As only the certain classes of VTs and VFs require high-energy shock for treatment, it is necessary to classify the VT and VF according to the heart rate and amplitude, respectively. Since, the heart rate calculation is complicated than the amplitude determination, hence at first we propose a technique to determine the heart rate. The heart rate in bpm is defined as the number of QRS complexes that occur in 60 sec. To determine the heart rate of an ECG signal, first derivative of the ECG signal is utilized. The reason behind the choice of the first derivative of the ECG signal is to utilize the high slope of the QRS complex. Figs. 8(a) and 8(b) show the VT signal and its first derivative. Figure 8(b) illustrates that when QRS complexes occur, correspondingly there is a high value (both in positive and negative part) in the first derivative signal. We consider only the positive part of the first derivative signal. Then this signal is filtered to enhance the QRS complexes further. From this filtered signal shown in Figure 8(c), the heart rate is easily calculated. The whole process of determining the heart rate of the ECG signal is described below: 1. First, choose a segment x(n) of duration L e -second and N samples from the previously saved ECG signal and then perform preprocessing as stated in Section. 2. Calculate the first derivative (x d (n)) of x(n). x n x n x n d ( ) The waveform of x d (n) is shown in Figure 8 x n x n if x n otherwise dp d d Apply the moving average filter on x dp (n) and find x dpf (n). 
Here a = F_s/10 defines the length of the moving average filter in samples, where F_s is the sampling frequency. If a is not an integer, it is rounded to the nearest integer value. The waveform of x_dpf(n) is shown in Figure 8(c).
5. Determine the maximum value (C) and the corresponding peak index (I) of x_dpf(n) and calculate the threshold value (T_h) from C as T_h = b * C, where b is a properly chosen constant. Here, we choose b = 0.25.
6. Store the peak index (I) and mask x_dpf(n) around this position by setting x_dpf(n) = 0 for I − γ ≤ n ≤ I + γ, where γ = F_s/8; if γ is not an integer, it is rounded to the nearest integer value.
7. Calculate again the maximum value (C) of x_dpf(n) and repeat step (6) until C falls below T_h.
8. Determine the total number of peaks (N_p) that lie above T_h and calculate the heart rate as H_R = (N_p / L_e) × 60 bpm (a code sketch of steps 1-8 is given below).

If the heart rate of the VT signal is greater than 180 bpm, then this VT is called shockable VT. As the decision between shockable and intermediate VT depends on the heart rate of the episode, we calculate the total number of QRS beats in an episode. To check the efficiency of the heart rate determination algorithm, two selected episodes are shown in Figs. 8(e)-(f). First, the total number of QRS beats in these episodes is determined from the annotation. Then, the proposed derivative-based heart rate determination algorithm is used to calculate the total number of QRS beats, which is found to be 15 beats for Figure 8(e) and 17 beats for Figure 8(f). In both cases, the total number of QRS complexes obtained by our algorithm is the same as that determined from the annotation. Thus, this heart rate determination method, though simple, may be used to calculate the heart rate of an ECG episode. However, in more complicated cases any standard heart rate determination algorithm reported in the literature [38,39] may be adopted to classify the VT. On the other hand, the amplitude of the VF signal is determined by taking the maximum value of the absolute VF signal within an episode. If the amplitude is greater than 200 μV, then this VF is called coarse VF.

Quality Parameters

The quality parameters we use for the assessment of algorithms are sensitivity, specificity, positive predictivity, and accuracy. For 'VTVF' detection, these four parameters are defined by
Sensitivity = (No. of correctly detected VTVF episodes / No. of true VTVF episodes) × 100 %,
Specificity = (No. of correctly detected nonVTVF episodes / No. of true nonVTVF episodes) × 100 %,
Positive predictivity = (No. of correctly detected VTVF episodes / Total no. of episodes detected as VTVF) × 100 %,
Accuracy = (No. of correct decisions / Total no. of decisions) × 100 %.
For 'VF' and 'shockable rhythm' detection, the definitions of these four quality parameters contain 'VF' and 'shockable rhythm' in place of 'VTVF', respectively. While calculating these four quality parameters to judge the effectiveness of an algorithm, in case of any unsatisfactory results, the values of the respective thresholds were adjusted in order to obtain the best possible results.

Results and Discussion

The full classification of different ECG pathologies is shown in Figure 9. To compare our algorithm with other reported algorithms in the literature, our classification approach can be interpreted as three different ECG arrhythmia identification schemes:
1. VTVF and nonVTVF
2. VF and nonVF (nonVTVF + VT)
3. shockable (VF + VT above 180 bpm) and non-shockable (nonVTVF + VT below 180 bpm)
Since the annotated files do not contain enough low-amplitude signals (fine VF), this type of signal is not addressed in this identification scheme, and the VF signals in the shockable rhythms are actually coarse VF. This section is divided into three subsections and each subsection presents the results of each identification scheme.
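For reference, the derivative-based heart-rate procedure of steps 1-8 above can be sketched as follows. This is an illustrative outline rather than the authors' code; details such as the exact moving-average implementation, boundary handling and the episode-length parameter are assumptions.

```python
import numpy as np

def heart_rate_bpm(x, fs, episode_s=8, b=0.25):
    """Derivative-based QRS counting (steps 1-8), for a 1-D NumPy array x."""
    xd = np.diff(x, prepend=x[0])             # first derivative of the ECG
    xdp = np.where(xd > 0, xd, 0.0)           # keep only the positive part
    a = max(1, int(round(fs / 10)))           # moving-average window length
    xdpf = np.convolve(xdp, np.ones(a) / a, mode="same")

    gamma = max(1, int(round(fs / 8)))        # masking half-width around a peak
    th = b * np.max(xdpf)                     # threshold T_h = b * C
    n_peaks = 0
    work = xdpf.copy()
    while True:
        i = int(np.argmax(work))
        if work[i] <= th:
            break
        n_peaks += 1
        work[max(0, i - gamma): i + gamma + 1] = 0.0   # mask around the peak
    return n_peaks * 60.0 / episode_s         # beats per minute

# Per the text: a VT episode with a rate above 180 bpm is treated as shockable
# VT, and a VF episode with peak amplitude above 200 uV as coarse (shockable) VF.
```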
Detection of VTVF from other arrhythmias First, we test the separability of our algorithm between the two classes of ECG signals, i.e., 'VTVF' and 'nonVTVF' against the annotated decisions suggested by the cardiologists in the respective databases. We compare our algorithm with the complexity measure algorithm [10] and the results are shown in Table 1. Comparative results illustrate that our algorithm shows better performance than the complexity measure algorithm. Also notice that the accuracy of the proposed MAV scheme is significantly higher than that of the complexity measure algorithm. Thus our simple and fast algorithm can separate VTVF from nonVTVF with higher specificity and sensitivity simultaneously. In this case, we had to change the threshold value of the CPLX algorithm from that defined in [10] to obtain higher sensitivity. Detection of VF from other arrhythmias VF occurs at the clinically crucial stage of human being. As mentioned earlier, while detecting VF from the other arrhythmias in the first stage, we should make the specificity high because a low specificity may risk patient's life by generating a false alarm to provide a high energy shock as treatment to save his/her life. But our proposed sequential algorithm leads to a high specificity. Quality parameters of our proposed algorithm with some other well-known algorithms are shown in Table 2. It is clear that our algorithm shows higher sensitivity compared to all other algorithms with very good specificity (99.32%). To obtain higher specificity, we had to change the critical threshold parameter of the HILB and PSR methods from that defined in the respective papers. To compare different methods independent of the value of the decision thresholds, the critical threshold parameter in the decision stage of the algorithm is varied. By varying the threshold, we can vary specificity and sensitivity as shown in Table 3. This table illustrates that our proposed method performs much better than the other VF detection algorithms. Detection of shockable rhythms from other arrhythmias This subsection presents the results of our last identification scheme which classifies the ECG pathologies into two groups: shockable and non-shockable rhythms. To compare our algorithm with the reported algorithm in [5], some modifications in the threshold values are made to accommodate unequal episode lengths (L e ). Our proposed algorithm considers L e = 8-s where L e = 10-s was considered in [5]. Modifications are shown in Table 4. For example, if Count1 < 250 for L e = 10-s, then for L e = 8-s Count1 < 250 10 * 8 or Count1 < 200. Here, as we concentrate only on the shockable and non-shockable rhythms, some classification errors may not result in detection errors. For example, from Figure 10 we see that a classification error occurs when a VT above 180 bpm is falsely detected as VF in the second stage. Since a VT above 180 bpm (one type of shockable rhythms) is falsely mapped into the VF group, which is also in the class of shockable rhythms, therefore, this classification error does not make any detection error as long as shockable rhythm is our concern. Using the modified threshold values mentioned in Table 4, the results obtained are presented in Table 5. As can be seen, our algorithm performs better than the count [5] algorithm in every index in detecting the shockable rhythms correctly. Conclusions A novel method for the identification of life threatening cardiac abnormalities from other arrhythmias has been presented. 
Performing sequential signal processing, we have detected these cardiac abnormalities with good accuracy. It has been shown that the proposed algorithm based on the MAV parameter and EMD technique can detect the VT plus VF signals correctly from other arrhythmias, and the accuracy level remains higher than that of other reported techniques. The effectiveness of the proposed technique has been demonstrated using standard databases over a vast range of both normal and abnormal ECG records. The MAV index successfully separates the VTVF arrhythmias from different types of abnormalities. And the other parameter NMAV which is calculated using the IMFs of the EMD technique can successfully separate VF from VTVF. Finally, a fast and simple heart rate determination technique is used to separate the high rate VT. Consistent results have been obtained by applying our algorithm on different well-known databases namely, MIT-BIH database, CU database and MIT-BIH Malignant Ventricular Arrhythmia database. Determination of the threshold parameters from the training dataset and then their successful application on the test dataset proves that the proposed parameters are universal. Some signal episodes were very difficult for classification even by expert cardiologists. Accuracy of our proposed technique slightly falls due to these confusing episodes. The algorithm presented here has strong potential to be applied in clinical applications for accurate detection of life threatening cardiac arrhythmias.
8,548.4
2010-09-04T00:00:00.000
[ "Engineering", "Medicine" ]
A pragmatic study of congratulation strategies of Pakistani ESL learners and British English speakers People usually express their feelings and emotions positively to others when they have happy occasions. However, the ways of expressing congratulation may vary because the expressive speech act “congratulations” is not the only way to express happiness and share others their happy news. The present study investigates the congratulation strategies of Pakistani English as second language (ESL) learners and British English speakers under the influence of social distance variable. A quantitative approach is applied in the analysis with the frequency of strategies (semantic formulas) being numerically analyzed. The current study recruited 120 participants, and who were further divided into four different groups: 30 British English speakers, 30 Pakistani ESL learners in the elite class, 30 Pakistani ESL learners of the middle class, and 30 Pakistani ESL learners in the lower class. For data collection, a discourse completion test (DCT) was used as a tool. The findings reveal that the most frequently used types of congratulation strategy are illocutionary force indicating device (IFID) followed by overlapped strategies (a combination of two), an offer of a good wish, expression of happiness, request for information, encouragement, expression of surprise, and suggestion of celebration, while other types of strategies are not used by the participants. The study reveals the existence of crosscultural differences in the use of congratulations by Pakistani ESL learners. The findings further show how the middle and lower class of Pakistani ESL learners use a more elaborated form of compliment responses (CRs) as compared to Pakistani ESL learners of elite and British English speakers. The findings may help in understanding the pragmalinguistic and sociopragmatic aspects of Pakistani ESL learners as compared to British English speakers. Theoretical background This study draws on the two pragmatic theories of Speech Acts (Austin, 1975;Searle, 1979), and Politeness (Thomas, 1983). A speech act is an utterance (cf. sentence) that "[is] geared towards doing things" (Wardat & Alkhateeb, 2020, p. 15). Austin (1962), in his articulation of Speech Act Theory, reported that when people utter words, they perform an act. Based on their intuition, Austin (1962) and Searle (1979) proposed that speech acts are universal and that they are realised by universal structures. Rosaldo (1982, p. 228) points out that Searle, a student of Austin's, "uses English performative verbs as guides to something like a universal law. " However, various cross-cultural investigations have shown that the universality of speech acts is far from reality and that both pragmalinguistic and sociopragmatic variations do exist in the realisation of speech acts (e.g., Blum-Kulka, 1987;Wierzbicka, 1987). More importantly, studies in the area of interlanguage pragmatics have found that communication breakdowns often result from a pragmatic failure in the realisation of speech acts (e.g., Avazpour, 2020). The second theory that directly pertains to this study is Politeness Theory (Brown et al., 1987;Thomas, 1983). Politeness may be defined as the act of behaving appropriately towards others. Brown et al. 's (1987) theory of politeness is premised on the concept of face-which was introduced by Goffman (1967). Brown et al., (1987, p. 
61) argue that face refers to "the public self-image that every member wants to claim for himself " and that face can be of two related types: negative and positive. Negative face refers to "the want of every 'competent adult member' that his actions be unimpeded by others" (p. 62) while positive face refers to "the want of every member that his wants be desirable to at least some others" (p. 62). In their theorizing of politeness-and based on their view of face- Brown et al. (1987) suggested two types of politeness: positive politeness and negative politeness. Positive politeness is related to the positive face of the hearer while negative politeness is exercised to redress the hearer's negative face. Thus, when speakers perform an act in which they do not respect either the hearer's positive or negative face, they are in fact threatening the hearer's face (i.e., performing an FTA). FTAs can be defined as "those acts that by their nature run contrary to the face wants of the addressee and/or of the speaker" (Brown et al., 1987, p. 65). Brown et al. (1987) make a distinction between "acts that threaten negative face and those that threaten positive face" (p. 65). The former refers to "those acts that primarily threaten the addressee's (H's) negative-face want, by indicating (potentially) that the speaker (S) does not intend to avoid impeding H's freedom of action" (p. 65) while the latter refers to "those acts that threaten the positive-face want, by indicating (potentially) that the speaker does not care about the addressee's feelings, wants, etc.-that in some important respect he doesn't want H's wants" (p. 66). Another very interesting aspect investigated in the current study is social distance variable. Social distance has been studied as an important sociolinguistic variable in the analysis of speech behavior (Wardat & Alkhateeb, 2020) within speech act and politeness Page 4 of 22 Saleem et al. Asian. J. Second. Foreign. Lang. Educ. (2022) 7:8 theories. The concept, in its simplest form, is a measure of the degree of friendship/intimacy (or absence thereof ) between interlocutors (see Joseph & Alexander, 2018). Social distance is one of the foremost factors that determines the way in which interlocutors converse precisely because it is an important determinant of the degree of comfort or politeness/deference in a verbal exchange (Saleem et al., 2021a). This, in turn, determines the constraints felt and the liberties taken in speech exchanges (Allami & Nekouzadeh, 2011). Regarding specific speech acts, there are those that are used most often among friends and acquaintances (e.g., compliments) and others that are rarely seen among this group (e.g., expressions of disapproval). In research on speech behavior, the social distance variable has perhaps been most extensively explored in the work of Wolfson (1986;as cited in Tsoumou, 2020). Wolfson's (1986) empirical and theoretical work derives from her in-depth study of the two speech acts, invitations, and compliments. Her findings on these two speech acts indicate that they are used as social strategies with the goal of opening conversations, establishing points of commonality, affirming or reaffirming solidarity, and deepening friendships. According to Taguchi (2018), interlocutors who are already acquainted have the greatest likelihood of developing a friendship (closing the social distance gap) based on such solidarity-establishing speech behaviors as compliments and invitations. 
In Al-Zubaidi (2017) analysis of the use of requests and invitations, both appeared in abundance among friends and acquaintances but were infrequent among either strangers or intimates. To explain more clearly, if we view the social distance scale as a continuum, we would find complete strangers at one extreme and intimates at the other end, with friends and acquaintances nearer to the middle. The categories of "strangers", "friends", and "intimates" are not discrete categories but are points along this continuum. These three principal points along the continuum are highlighted in the present study in order to achieve consistency with other speech act research that has studied the social distance variable. The work of Lect and Abdulkhaleq (2020) as well as that of Wardat and Alkhateeb (2020) and others used these broad categories of social distance relationships in their data analysis. Previous studies on congratulation speech act A careful look at research on speech acts shows that studies on the speech act of congratulating are few and that the majority of these studies were conducted in non-Arab contexts. For example, Allami and Naeimi (2011) is a study that was conducted in Iran in order to explore how Persian speakers offer congratulations and the strategies they use to do so. Elwood's (2004) framework was used to classify congratulating strategies that Persian speakers use in various situations. Twenty-five men and 25 women of different socio-economic backgrounds were recruited to take part in the study. They were asked to complete a Discourse Completion Test (DCT) that included different situations of happy occasions. The results revealed that giving gifts to the listener, joking, white lies, and exaggeration were the most frequent strategies while wedding and grant was only marginally used. Within the Duhok speech community, Khalil (2015) explored the use of the speech act of congratulating by a group of Kurdish students. His study relied on face-to-face meetings of special occasions with a particular focus on gender differences. The researcher Page 5 of 22 Saleem et al. Asian. J. Second. Foreign. Lang. Educ. (2022) 7:8 sought to understand how male and female Kurdish students offer congratulations in various occasions and to find out the differences and similarities between the two groups in the use of congratulating strategies following the taxonomy proposed by Elwood (2004). The results indicated that the most favourable strategy used by male students was the expression of thanks and wishes which was found in different occasions. However, the Kurdish female students preferred to use the thanks expression. The researcher concluded that gender was an indicator of the frequency and type of strategies used. The review of related literature has also shown that, of the studies that were conducted, some were cross-cultural studies on the speech act of congratulating. For example, Nasri et al. (2013) investigated the strategies used to congratulate others within three different speech communities, viz. American, Armenian and Iranian. The researchers used a DCT to collect the participants' responses. They also relied on Elwood's (2004) framework to classify the strategies. One hundred and twenty participants were involved in this study, 40 for each group (male and female). 
In an early phase of the study, the researchers chose 15 speakers from each of the three language groups to assist them in deciding on situations that warrant congratulating; the results were marriage and the birth of a baby. The study included the variable of status in classifying the strategies. The findings revealed that both the Americans and the Persians used illocutionary force indicating devices (IFIDs) and the offer of good wishes as their most frequent formulas. However, the strategies Armenian speakers used related mainly to the expression of happiness. The researchers concluded that strategy use is highly related to the status of the hearer. In another cross-cultural investigation of the speech act of congratulating, Can (2011) examined the conceptualisation of congratulating in the British and Turkish cultures. Adopting the Natural Semantic Metalanguage Approach, the researcher ventured to explore the types of strategies used by people in the two cultures when congratulating others. The analysis was conducted using a mixed-method approach utilising both qualitative and quantitative measures. The results of the study revealed that although there were some cultural differences between the two sets of data, there appeared to be some similarities in the conceptualisation of congratulating in both the situations and the strategies employed. In the Pakistani context, Aziz et al. (2018) investigated the speech act of congratulating within the Pakistani Urdu-speaking community. The researchers aimed at examining the congratulating strategies used by Pakistani English as a foreign language (EFL) graduate students and the types of positive politeness strategies in the students' responses. They adopted Elwood's (2004) taxonomy of congratulating strategies and an adapted version of Brown et al.'s (1987) framework. The findings showed that the illocutionary force indicating device, the offer of good wishes and the expression of happiness were the strategies most frequently used by the students. At the end of the article, the researchers called for more studies on the speech act of congratulating within the Pakistani speech community, and more so cross-culturally. The present study represents a response to their call and investigates cross-cultural differences in the use of the speech act of congratulating in Pakistani English and British English. Research methodology The current study focused on the cross-cultural investigation of the congratulating strategies of Pakistani ESL learners and British English speakers. The study followed a quantitative research design for data collection and data analysis. For data collection, a discourse completion test (DCT) was used as the research tool. The participants of this study were Pakistani ESL learners and British English speakers. Kasper and Dahl (1991; as cited in Saleem et al., 2018) recommended that, because participants' responses in cross-cultural and interlanguage pragmatic (ILP) speech-act realization studies seem to cluster around specific subcategories, 30 subjects per undivided sample (p. 16) who respond to a DCT are sufficient to answer most ILP speech-act realization questions (see also Bergman & Kasper, 1993; Kasper et al., 1996). Therefore, the participants of the current study were 120 in total, divided into four groups of 30 participants each: (a) 30 British English speakers (BritE), (b) 30 Pakistani ESL learners from the elite class (PESL/E), (c) 30 from the middle class (PESL/M), and (d) 30 from the lower class (PESL/L). The participants of the current study were selected through nonrandom purposive convenience sampling procedures.
There were both male and female respondents in the current study. The only criterion for selecting the Pakistani ESL learners from different institutions was that the respondents should be educated (at least up to the bachelor's level, having studied English as a compulsory subject) and be postgraduate students in the final years of their course of study. Moreover, the participants were recruited on the basis of socioeconomic status. The elite class participants include aristocrats and "high-society" families with "old money" who have been rich for generations; they live in exclusive neighborhoods, gather at expensive social clubs, and send their children to the finest schools. The middle class is largely made up of educated people with high incomes, such as managers, business owners, doctors, engineers, and secretaries, while the lower class is largely made up of less educated people with lower incomes, such as workers, small business owners, and teachers. Their learning needs may also remain unmet because they have difficulty accessing information from professional resources. The Pakistani ESL learners and British English speakers who took part in the study came from diverse majors, including the Master in Management Sciences, Master in Language and Linguistics, and Master in Computer Sciences. The sample of the present study was relatively heterogeneous because the two populations (British English speakers and Pakistani ESL learners) contrasted with each other in terms of their cultural and academic experiences and their linguistic behavior. Research tools Researchers (Tsoumou, 2020; Wardat & Alkhateeb, 2020) suggest that data obtained through the use of a DCT, particularly in the key formulas and patterns, are relatively similar to naturally occurring data. In response to situations across different languages, the two types of data share the same semantic formulas and techniques but vary in their forms, as could be expected. In addition, Al-Zubaidi (2017) states that the most popular method of obtaining large amounts of data from large numbers of participants is to use a DCT. A clear example of this is the CCSARP, where the initial project data contained responses in seven languages and five interlanguages to 16 different circumstances (Blum-Kulka & Olshtain, 1989). Every community comprised 200 informants, and almost 40,000 samples of requests and apologies were included in the scenarios. This model has been followed in other language studies and in interlanguage studies in which the shared survey was translated into other languages, generating large amounts of comparable data. Hence, the current study employed a discourse completion test (DCT) as its research tool (see Appendix A). The DCT included six real-life scenarios designed on the basis of the social distance contextual variable (Table 1). The questionnaire comprised six real-life situations, each with a description of the particular social context in which the speaker has to imagine himself/herself, and the participants had to fill in their responses in English as they would in real-life settings. After each situation in the DCT, a blank space was given in which the participants had to write their responses, because it is a type of written questionnaire.
Reliability of the instrument The DCT situations were validated by three professors from the University of Leeds, UK, and five professors from the Lahore University of Management Sciences (LUMS). The professors were requested to respond to the DCT in English and to comment if any situations were inappropriate or lacked clarity in language. All professors suggested some changes, which were incorporated before collecting the final data. Further, the instrument was pilot-tested with 5 PESL/E, 5 PESL/M, and 5 PESL/L class speakers (inter-rater reliability = .89) and judged valid and very close to authentic settings. Regarding the time required to fill in the DCT, we found that the participants could complete the questionnaire in no more than 15 min. (Table 1 lists the six scenarios and their social distance coding; for example, being a chief guest in an annual prize distribution ceremony is coded D, neutral social distance.) Data collection procedure For data collection from British English speakers, a colleague who was studying in the UK was asked to administer the data collection instrument. Through telephone conversations and e-mail, he was instructed by the researchers on how to administer the research tools and on the purpose behind collecting this type of data. Furthermore, the research tools were e-mailed to faculty members of the University of Leeds and the University of Manchester, UK, whose e-mail addresses were obtained from the university websites. Within the Pakistani context, the researchers themselves accessed the participants at the proposed institutions to collect data from Elite, Middle, and Lower class respondents. Formal consent was sought from all participants before they took part in the study. The participants who consented were asked to complete the DCT in English, to follow all the instructions provided for its completion, and to give their responses as they would in a real-life societal context, keeping them as natural as possible. Data analysis procedure The data obtained through the DCT were analyzed statistically with SPSS, and descriptive statistics were run. In the data analysis procedure, the DCT responses for the congratulation expressions were coded in the light of the taxonomy of congratulation strategies offered by Elwood (2004) and modified according to the needs of the study (Table 2, Classification of congratulation strategies; Elwood, 2004). After the coding of the data, descriptive statistics were run to obtain the frequency and percentage of the congratulation strategies and to examine the differences and similarities between British English speakers and Pakistani ESL learners. Coding reliability A second rater (a content specialist) coded 20% of the written discourse completion test (WDCT) data from each group to ensure the consistency of the implementation of the coding scheme. The second rater, a Pakistani English speaker, is a professor and our colleague at the University of Central Punjab, Lahore (Ph.D. in Linguistics, University of Illinois, Urbana-Champaign, USA, and MSc Applied Linguistics, University of Edinburgh, UK) with 44 years of ESL experience (having served as an English language instructor in Finland, the UAE, Saudi Arabia, Algeria, and Pakistan), and coded the two sets of English data, from the Pakistani ESL learners and the British English speakers.
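Since the coding and reliability workflow above is described only verbally, the following minimal Python sketch illustrates one way it could be reproduced: tabulating strategy frequencies and percentages per group and checking agreement on the 20% subsample coded by the second rater. The file name and column names are hypothetical, and Cohen's kappa is used only as an example statistic, since the paper reports the .89 figure without naming the measure.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical layout: one row per DCT response, already coded with an
# Elwood-style strategy label; columns: group, situation, strategy, strategy_rater2
df = pd.read_csv("dct_responses.csv")

# frequency and percentage of each strategy per group (a Table 3-style summary)
freq = pd.crosstab(df["group"], df["strategy"])
pct = pd.crosstab(df["group"], df["strategy"], normalize="index") * 100

# inter-rater agreement on a 20% subsample (here drawn at random for illustration)
sub = df.sample(frac=0.20, random_state=1)
kappa = cohen_kappa_score(sub["strategy"], sub["strategy_rater2"])
print(pct.round(1), f"Cohen's kappa = {kappa:.2f}", sep="\n")
```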
A training session with the rater and researchers of the current study was conducted prior to starting the coding to familiarise them with the coding scheme and allow them to practise coding some data to ensure their understanding of the task. A discussion session was held after they had coded the data to analyse findings. The reliability of the interrater was high; most of the interrater inconsistencies were resolved through analysis and discussion of the coding manual definitions. Finding and discussion In this section, the frequency of using the verbal types of responses, which are called congratulations strategies, are presented in the order of the proposed research questions. The first research question of this study was asked: Q1. How do Pakistani ESL learners and British English speakers use the realization strategies of the speech act of congratulating differently? Table 3 shows the frequency and percentages of the 10 types of verbal responses found in the congratulations of the four groups. It is obvious that the "IFID" (congratulations) was the most frequent strategy used by PESL/M and PESL/L groups (28% and 29%) followed by the strategy of "overlapped strategies" (22% and 22%). In contrast, BritE speakers and PESL/E groups employed "the offer of good wish" strategies more often (25% and 25%), and PESL/M and PESL/L class speakers used these strategies with a percentage of (9% each). Likewise, PESL/M and PESL/L speakers, BritE also used "Overlapped" congratulations strategies quite frequently (19%). Although the first illocutionary force indicating device (IFID) type was the most frequent strategy used by the four groups, the difference of frequency use was not much in munbers as both BritE and PESL/E participants used less IFID strategies. In contrast, other two groups, PESL/M and PESL/L groups used more IFID strategies (see Table 3). It was expected to find "congratulations" used more frequently than the other strategies because the events were happy occasions or news. Also, it is usually the first expression to utter when hearing something good to express happiness and share the occasion with others. This result is supported by almost all the studies on congratulation in different languages and cultures that found "congratulations" is the most frequent expression. However, this differs from Makri-Tsilipakou's results (2001) in Greek and Hernández's (2008) in Peninsular Spanish. Makri-Tsilipakou explained that the use of the expression "congratulations" refers to formality or distance in the relationship between the speaker and addressee. Therefore, the use of "well wishes" is more than "congratulations" in Greek. Hernández (2008) found that "congratulations, " which was used only by women, was less common than expressing approval, happiness, and making celebration plans. In the current study, "congratulations" was used most frequently in the event of "a candidate is newly selected as chief minister" (62 times) and the event of "being a chief guest, speaker congratulates the position holders" (58 times). Although there is more than one form for mubarkan "congratulations" in Urdu, mubarkan is the most frequently used one. The other form for mubarkan is Mubarak, and it was not used frequently because it is from Standard Urdu. Therefore, the simplest form was found frequently. Mubarkan was intensified in more than one way, often by using various numbers, such as a thousand, million, billion, and so forth, to intensify its meaning. 
In addition, it was intensified by repetition, such as by repeating its vowel (a), the expression "mubarkaan" itself, the number itself, or by adding other bigger numbers in the form. These various ways of intensifying "mubarakank" can be a result of the absence of prosodic strategies. Additionally, they emphasize Leech's (2007) point of view that intrinsically courteous speech acts, such as congratulations, need intensification or gradable expressions. Face enhancing acts such as congratulation also need to be hardened and maximized (Kerbrat-Orecchioni, 1997). Although congratulation strategies lack the physical dimension, the social dimension exists by interacting with friends on activities (Derks et al., 2008). Therefore, offering congratulations on emphasizes the fact that the goal of congratulation is not only to express a psychological state but also it has a social goal that is aimed at strengthening social relationship and intimacy (Makri-Tsilipakou, 2001) among individuals or just to satisfy the social expectation (Bach & Harnish, 1979). Results indicated that PESL groups congratulation strategies were influenced by their L1 culture-specific and language-specific semantic formulaic expressions. They were found using the English equivalent of congratulations (Mubakaan) in Urdu language, except PESL/E group participants who showed a progress towards developing pragmatic competence of the target language. However, in this study, the results showed that although "congratulations" was the most frequently used, it was not usually used alone. It was used as a single strategy only (160 times) and mostly in the event of "a candidate is newly selected as chief minister" and the event of "being a chief guest, speaker congratulates the position holders". The comparison of "Overlapped" strategies among the events was conducted based on the top four compound strategies in each event. Therefore, "congratulations" is mostly used with "offer of good wishes". This result refers to the importance of taking into consideration the patterns of polite compound strategies, and focus not only on the polite expressions alone based on their frequency. The compound strategy of "congratulations" with "offer of good wishes" was used mostly in the event of "a secretory introduces the newly elected sports secretory to the participants", and "a Chief Guest at the Annual Prize Distribution ceremony". This compound strategy was followed by the use of "congratulations" with "offer of good wishes" and "Overlapped" congratulations strategies. This pattern of compound strategy was also used primarily in the event of "a family doctor got married who met 5 months ago", "a friend got a job", and "a friend got appointed as a surgeon in a government hospital". "Offer of good wishes, " as the second most frequently used strategy used by the BritE and PESL/E groups was among the other types of responses, is supported by research by Allami and Nekouzadeh (2011) in Persian; Kočovska (2013) in Latin; and Dastjerdi and Nasri (2013) in Persian, American English, and Syrian Arabic. Some studies in Persian, such as Ghaemi and Ebrahimi (2014) found that "offer of good wishes" is the third most frequently used strategy, while other studies, such as García (2010) in Spanish, found it to be a common strategy in general. However, the different results that indicate a preference and frequency are usually affected by many factors, such as the background of participants, relationship, situation, the tool for collecting data, and so forth. 
Most of the expressions used in the current study in "offer of good wishes" are religious expressions, which are prayers/blessings rather than nonreligious wishing expressions. "Offer of good wishes" was also used primarily in the event of "a secretory introduces the newly elected sports secretory to the participants", and "a Chief Guest at the Annual Prize Distribution ceremony". However, the way of congratulating others in the event of "the birth of a baby, " for example, can vary or differ based on the medium of communication. For instance, Willer (2001) found that different words are used to describe emotions and physical characteristics of the newborn boy or girl in congratulation greeting cards. Unlike BritE speakers, PESL speakers were found using socio-religious expressions in their congratulation strategies indicating cultural differences in their congratulation strategies, and an inclination towards adhering to their L1 cultural norms. This way of congratulating is affected by the different genre in the way of expressing congratulation. It was also observed that PESL users tend to employ more than one prayer/blessing or wishing expression in a comment. The use of prayers/blessings as well-wishing is a result of Islamic principles in Pakistani society. Blessings are used by people who believe in the power of words (Wierzbicka, 1987;Walkinshaw & Oanh, 2014;Wannaruk, 2008;Yuan, 2001;Zhang, 2020) however, this power is believed to come from Allah, not from the words by themselves. The use of prayers in the situations of congratulation also were found by Emery (2000) and Bataineh (2013) in Arabic, and by Ghaemi and Ebrahimi (2014) in Persian because they are influenced by the same religion, Islam. This influence was also observed in the other studies of speech acts by Saudis, such as greetings and leave takings (Hassanain, 1994;Turjoman, 2005), compliment (AlAmro, 2013), thanking (Altalhi, 2014), refusal (Al-Shalawi, 1997, and invitation (Alfalig, 2016). However, the use of religious expression is also used in "Overlapped congratulations" in the current study. Although various expressions were used in overlapped congratulations, the religious expression such as mashaallah/Jazakaallah "as Allah wills/ as Allah wills, Allah blesses" was used more frequently than the other praising expressions. It was also used more frequently in the event of "a family doctor got married who met 5 months ago", "a friend got a job". It is usually used by Muslims to express praise or happiness when someone hears good news or sees something he/she likes. It is believed that Allah protects the good news/object of jealousy and the evil eye (AlAmro, 2013). In the current study, some strategies were used infrequently, and most of those were supported by Elwood (2004) and Allami and Nekouzadeh (2011) who found the strategies used with a low frequency. For instance, "expression of validation" was used only on a few occasions by the four groups, mostly in the event of "getting a new position" and "winning an election. " "A suggestion to celebrate" was used in the current study only on a very limited occasions by the four groups and mostly in the event of "getting a permanent job" In contrast, Al-Hour (2019) found that it is common in Palestinian society. However, it was interesting to find that some of the respondents employed some emojis (though it was not included in the scope of the study), such as party popper, confetti ball, red balloon, and so forth to celebrate the occasion. 
The strategy of "expression of envy" was used only at certain occasions by the four groups, especially in one of the events, that of "getting a new position" (i.e., The position of "university professor"), However, the use of this strategy was not expected because people usually express their feelings positively and use courtesy in happy events and Page 13 of 22 Saleem et al. Asian. J. Second. Foreign. Lang. Educ. (2022) 7:8 avoid negative comments. Therefore, it was not used frequently in the data because of the nature of responsibilities in the position of "professor" in university. "Expressing of surprise" was used at some occasions such as in the event of "family doctor got married" and "getting a new position" to express surprise, and/or that the occasion was not expected. It was used on a few occasions by both male and a female respondents from the four groups. However, Unceta Gómez (2016) found that expressing surprise as a strategy of congratulation was not used by women in Latin. "Requesting information" was used by both BritE and PESL/E speakers more often than the PESL/M and PESL/L class groups and mostly in the event of "family doctor got married" However, this result was not supported by a number of studies in which it was found that it is one of the most frequently used strategies (Al-Hour, 2019; Dastjerdi & Nasri, 2013;Elwood, 2004;Nasri et al., 2013;Mahzari, 2017). Nevertheless, people feel more comfortable asking questions about the personal news of occasions in face-to-face communication. The second research question of this study asked: 1. How do social distance variable effect the use of congratulation strategies of British English speakers, Pakistani ESL/Elite, PESL/Middle, and PESL/Lower class learners? Results regarding social distance variables show that the respondents of BritE, PESL/E, PESL/M and PESL/L groups used more strategies of IFID (9.4%, 8.3%, 16% and 17%) when interacting with distant level interlocutors. It can be noticed that both PESL/M Page 14 of 22 Saleem et al. Asian. J. Second. Foreign. Lang. Educ. (2022) 7:8 and PESL/L class participants used comparatively more IFID strategies than the other two groups while interacting with distant level interlocutors. Similarly, as can be noticed in the Table 4, the four groups used less congratulations strategies while interacting with close and Neutral level interlocutors. The Request for Information (RFI) strategy was not favoured much by PESL/M class speakers at all. The other three groups (9.4%, 6%, and 4%) were found using some Request for Information (RFI) strategies while interacting with close level interlocutors. As regards Offer of Good Wish (OoGW) congratulation strategy is concerned, Table 4 shows that BritE (16%) and PESL/E (17.2%) groups used more strategies of OoGw when they were interacting with interlocutors of close social distance. In contrast, both PESL/M and PESL/L class participants did not use these strategies more often as can be seen in the Table 4 and Fig. 2. Regarding Expression of Happiness (EOH) congratulations strategies, Table 4 shows that BritE (7%), PESL/E (5%), PESL/M (6.1%) and PESL/L (5%) groups used more strategies of EOH for close social distance interlocutors as compared to other two variables (neutral and distant). Another difference among the four groups can be observed in the use of the Utterance of Encouragement strategies. Both BritE (6.1%) and PESL/E groups used more strategies for close level interlocutors. 
In contrast, both PESL/M (6.1) and PESL/L (5%) groups were found using UoE strategies more often for distant level interlocutors. Regarding Overlapped Congratulations strategies, the four groups used this strategy with a percentage of (BritE 7.7%), (PESL/E 7%), (PESL/M 14%) and (PESL/L 13%) when interacting with close level interlocutors. Nevertheless, the four groups did not favour the use of EoS, EoV, and SoC congratulation strategies more often, as can be noticed in Table 4 Fig. 2 Congratulation strategies distribution interacting with social distance Page 15 of 22 Saleem et al. Asian. J. Second. Foreign. Lang. Educ. (2022) 7:8 Noticeably, social distance is found to have great effect on congratulations strategies behaviors in all four groups. In general, as scholars argue (Avazpour, 2020;Lect & Abdulkhaleq, 2020;Pearson & Hasler-Barker, 2020;Tereszkiewicz, 2020;Tsuchiya, 2020;Vassilaki & Selimis, 2020;Wardat & Alkhateeb, 2020), the greater the social distance between the speaker and the hearer, the more frequently IFID expressions (direct and/ or indirect) are employed. More specifically, people almost always utter IFID (directly and/or indirectly) when they are using congratulation strategies for strangers as in the following situation "a passenger is sitting beside you became very excited and happy for being appointed as a surgeon in a government hospital", and very often, they use "Congratulation/Heartiest congratulations" (BritE) "Bundle of Congratulations" (PESL/E), "Congrats" (PESL/M), and "Congratulations" (PESL/L) in their expressions. Further, Avazpour (2020) states that IFID (directly and/or indirectly) is also very frequent used with friends but used less with strangers; "Request for Information, and The Offer of Good Wish" expressions are used in some scenarios as in "a friend gets a permanent government job" but these appear to be situation-specific. With intimates, "Expression of Happiness" terms are found in all "intimate interlocutor" situations investigated, and the number of responses with these Expression of Surprise strategies is nearly equal to that of the responses with the Suggestion of Celebration strategies in the events like "a family doctor gets married". The findings are acknowledged by Wardat and Alkhateeb (2020) who argue that when there is interaction with the distant level interlocutors in congratulation scenarios, more politeness is displayed and there are more chances of using IFID, and the Offer of Good Wish strategies than any other strategies. The findings are also consistent with Lect and Abdulkhaleq (2020) who state that social distance determines the choices of congratulations in different social scenarios. The four groups' participants, especially PESL/M and PESL/L tend to use more Overlapped Congratulations strategies with the respondents of close level interlocutors and prefer to use less Overlapped Congratulations strategies with neutral and stranger level respondents. In contrast, BritE and PESL/E speakers tend to use less Overlapped Congratulations strategies and prefer to use more Request for Information and Utterance of Encouragment strategies. These findings illustrate that speaker in these situations tend to be quite interactive and prefer to keep harmonious relation with each other by using "Request for information" and "Encouragment" strategies. Saleem et al. 
(in press) argue that speakers using these strategies show that they wish to be pretty cordial and amiable with their interlocutors (acquitance and stranger level social distance). Here, the evidence of crosscultural difference is quite clear, as PESL/M respondents and PESL/L class respondents almost tend to use the similar type of congratulations strategies. They are found exactly translating the Urdu expressions into the target culture. Though this transfer is not negative in nature, PESL/M and PESL/L groups lacked pragmatic competence of the target culture, and could not comprehend the situation as the British English and PESL/E speakers did, and they adhered to their native cultural norms. Further, a positive development can be observed as regards PESL/E learners were concerned, unlike PESL/M and PESL/L groups, PESL/E group used almost the similar congratulation strategies as BritE speakers were found using. Nevertheless, the results are in line with (Al-Hour, 2019;Al-Shboul & Huwari, 2016;Avazpour, 2020;Elwood, 2004;Nasri et al., 2013;Tsoumou, 2020) previous studies which argue that ESL learners Page 16 of 22 Saleem et al. Asian. J. Second. Foreign. Lang. Educ. (2022) 7:8 are found quite competent in grammatical competence and are less aware of pragmatic competence. At most of the occasion, especially while interacting with the social distance phenomenon, ESL learners prefer to utilize their cultural-specific responses which are inappropriate and can lead to miscommunication or breakdown with the target culture speakers. Nevertheless, the speakers of this study tend to use more IFID strategies with the strangers in events like "a passenger sitting beside you became very excited and happy for being appointed as a surgeon in a government hospital" than the close friends or colleagues and intimate relations. This might be because speakers often tend to look more caring and cordial with the distant interlocutor and use more positive politeness strategies. Furthermore, it is supported by past studies (Al-Qudah, 2001;Bataineh, 2013;Can, 2011;Lodhi & Akash, 2019;Majoko, 2019;Malmir & Derakhshan, 2020;Martín-Laguna, 2020;Meihami & Khanlarzadeh, 2015;Mohd et al., 2020), who found the use of more detailed strategies with interlocutors of distant level and less with close friends and intimates. Notwithstanding, the findings acknowledge the past studies (Chen, 2020;Dawson, 2020;Ezzaoua, 2020) which argue that crosscultural differences occurred in the production and comprehension of politeness strategies of advanced ESL learners because it is sometimes challenging to perceive and understand social distance scenarios in the target culture. The findings also illustrate the PESL/E participants' progress towards approximation and development ofthe target culture'ss sociopragmatic knowledge, recognizing the judgments of earlier studies in which Dawson (2020) and Saleem et al. (2021c) claim that ESL assessed the social distance of their interlocutors in the same way as the American native speakers and British English speakers, showing ESL learners' development towards the target language sociopragmatic knowledge. Although there is found the negative transfer of sociopragmatic knowledge to the target language in a situation "a friend got a job", yet we can find some development as well. Hence, it may be concluded that unlike PESL/M and PESL/L groups. PESL/E group to some extent approximated the target culture's sociopragmatic knowledge in their production and perception of social distance variable. 
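Before the conclusion, a brief sketch of how a Table 4-style breakdown of strategy use by social distance could be tabulated from coded responses; the data file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical coded data; columns: group, social_distance, strategy
df = pd.read_csv("dct_responses.csv")

# strategy use (%) by social distance level within each group
table4 = (df.groupby(["group", "social_distance"])["strategy"]
            .value_counts(normalize=True)
            .mul(100).round(1)
            .unstack(fill_value=0))
print(table4)
```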
Conclusion Considering the results, it can be said that the speech act of congratulation is one of the important and frequently used speech acts in everyday communication as suggested by the contexts where the speech act is realized and the strategies. Especially in the case of Pakistani ESL learners, it is found that congratulations are not only frequently used to acknowledge one's success, but they are also exchanged among interlocutors on special days and emotionally loaded occasions such as religious and national days/festivals, birthdays, anniversary and wedding days. Furthermore, this study has revealed that the native speaker conceptualization of the English and Pakistani ESL (except elite class speakers) learners' speech act of congratulation is different considering the contexts of use and the strategies/components. Based on the findings, it is possible to state that English congratulation is more task-oriented, whereas Pakistani congratulation is more social relational (Can, 2011). Specifically, in terms of achievement, English and PESL/E seem to follow an individually oriented achievement motive, while Pakistani (ESL/M and ESL/L) appear to have a socially oriented achievement motive (Ezzaoua, 2020) as the use Page 17 of 22 Saleem et al. Asian. J. Second. Foreign. Lang. Educ. (2022) 7:8 of particular strategies in the contexts of achievement has indicated. In this respect, in Hofstede's terms (2011), the collectivist and feminine aspect of Pakistani culture and the individualistic masculine aspect of British culture seem to be reselected in the realization of the speech act of congratulation as far as the data and the findings of this study are concerned. This study can be considered to have some contributions in the areas of cross-cultural and intercultural communication by focusing on the sociopragmatic aspects of the speech act of congratulation in British English and Pakistani and presenting the cultural knowledge and awareness through congratulation contexts and strategies which will help interlocutors to cope with real life situations. In this way, intercultural communicative competence (Byram, 2012) can be ensured enabling non-native speakers to "survive" in new contexts and interpersonal relations by successfully responding to unfamiliar linguistic, cultural and social factors (Olshtain & Celce-Murcia, 2016). The current study also has some implications in foreign language education, specifically in the area of pragmatic competence and the development of speech acts by providing metapragmatic information about the speech act of congratulation, which lacks evidence in the literature and in teaching materials (Saleem et al., 2020;Taguchi, 2019). Such information or input based on linguistic evidence could be useful for learners of English as a second/foreign language, who can have the chance to develop cultural awareness and communicative competence. Not only non-native speakers but also native speakers will gain awareness with regard to what is appropriate in the realization of the speech act of congratulation in their own speech communities since for non-native speakers such knowledge is often unavailable at a conscious level (Wardat & Alkhateeb, 2020). In addition, material developers or program coordinators can use the input in developing materials for learners of English as a foreign/second language and thus, incorporate it into textbooks or other supplementary classroom materials. 
The incorporation of cultural and pragmatic information regarding the speech act of congratulation is expected to increase the number of "small C" elements present in textbooks in comparison with the fact-oriented "big C" elements which have been dominating the textbooks and which have been criticized for their inadequacy in developing cultural competence (Taguchi & Roever, 2017). Other than foreign language learners and material developers, teachers can also benefit from the results of this study, especially in terms of explicit metapragmatic instruction and teachers' pedagogical development as well as pragmatic competence.
Hierarchical Optimization of 3D Point Cloud Registration Rigid registration of 3D point clouds is the key technology in robotics and computer vision. Most commonly, the iterative closest point (ICP) and its variants are employed for this task. These methods assume that the closest point is the corresponding point and lead to sensitivity to the outlier and initial pose, while they have poor computational efficiency due to the closest point computation. Most implementations of the ICP algorithm attempt to deal with this issue by modifying correspondence or adding coarse registration. However, this leads to sacrificing the accuracy rate or adding the algorithm complexity. This paper proposes a hierarchical optimization approach that includes improved voxel filter and Multi-Scale Voxelized Generalized-ICP (MVGICP) for 3D point cloud registration. By combining traditional voxel sampling with point density, the outlier filtering and downsample are successfully realized. Through multi-scale iteration and avoiding closest point computation, MVGICP solves the local minimum problem and optimizes the operation efficiency. The experimental results demonstrate that the proposed algorithm is superior to the current algorithms in terms of outlier filtering and registration performance. Introduction Three-dimensional point cloud registration is a fundamental task for many 3D computer vision applications, such as 3D reconstruction [1], 3D recognition [2], and simultaneous localization and mapping (SLAM) [3]. The purpose of registration is to transform a set of point clouds in various views into the same coordinate system, which is optimal for model recovery or pose estimation. ICP [4] and its variants are the most widely used 3D point cloud registration methods due to their simplicity and good performance. However, considering the closest point as the corresponding point may put the ICP method at risk of the local optimum issue, especially when the point cloud has outliers or does not have good initialization. In addition, ICP cannot process a large scale of input clouds because of its high computational complexity. With the advancement of sensor technology, it has become easier and cheaper to obtain 3D point clouds, making point clouds more widespread. However, the scanned point clouds often have a huge scale and outliers, so they need to be processed before registration. The voxel filtering [5] can quickly and uniformly reduce the scale of the point cloud, while it is not good at removing outliers. Methods based on statistics [6,7], radius [8] and point density [9] remove random outliers by calculating the distribution of neighboring points and setting appropriate thresholds. However, they cannot work well when many outliers are evenly distributed. In addition, they spend more time due to the neighborhood point calculation. To solve the local minimum problem of ICP, some algorithms have been proposed. Go-ICP is the first global registration algorithm based on the ICP framework; it uses a branch and bound method to find the global optimal. Although it solves the local minimum problem, it is still sensitive to the initialization. A coarse-to-fine strategy [10,11] is presented, where the coarse registration provides an initial estimation for the fine registration to obtain a precise alignment. This strategy can accurately register point clouds without initialization. However, it consumes more time for two parts of the calculation, especially for the point cloud feature extraction in coarse registration. 
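For reference, a minimal point-to-point ICP sketch is shown below (closest-point matching with a KD-tree, rigid alignment via SVD). It is a generic textbook formulation used only to illustrate the alternation described above, not the implementation evaluated later in this paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch / SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, max_iter=50, tol=1e-8):
    """Classic point-to-point ICP: alternate closest-point matching (KD-tree)
    and SVD-based re-alignment until the mean squared distance stops improving."""
    src = source.copy()
    tree = cKDTree(target)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)               # closest-point correspondences
        err = np.mean(dist ** 2)
        if prev_err - err < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot, np.sqrt(prev_err)
```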
In this paper, a hierarchical optimized 3D point cloud registration algorithm includes improved voxel filter and Multi-Scale Voxelized GICP (MVGICP) is proposed. We introduce an improved voxel outlier filter, which combines voxel downsampling and point density information, delete grids of which the number of points is less than the threshold, and replace other points with centroids. Compared with the traditional voxel filter and the statistical outlier removal method, the improved voxel filter has the best filtering performance and takes less time than the statistical algorithm. Then we propose Multi-Scale Voxelized GICP (MVGICP) to register the 3D point clouds, which is based on the distribution-to-distribution strategy. MVGICP firstly voxelizes the target point cloud with a certain voxel size and calculates the point cloud distribution in the grid. The point cloud distribution is then brought into the GICP framework to get the transformation, followed by repeating this process with a smaller voxel size until the size is small enough. Thus, MVGICP does not require any coarse registration. Larger voxel size can initially transform the point cloud and the small size can further refine the registration result. MVGICP does not need the closest neighbor search either, so it can significantly improve the computing efficiency. The experimental results show that the proposed algorithm can effectively filter out the outliers and obtain better registration accuracy as well as computation efficiency in several datasets. In general, the contribution of this paper can be summarized as follows: (1) An improved voxel outlier filtering method is proposed. This method combines traditional voxel filters and point density to achieve accurate filtering of outliers while efficiently downsampling. (2) A novel 3D point cloud registration algorithm MVGICP is presented. It uses multi-scale iteration to avoid the complex closest points computation, and significantly improves the situation that GICP is prone to fall into local minimums and low calculation efficiency. At the same time, it has higher registration accuracy than VGICP. (3) A thorough evaluation of the proposed algorithm is presented, including comparisons with the current algorithms in terms of outlier filtering and registration performance. The experimental results show that the proposed method performs better than other algorithms. The remainder of this paper is organized as follows. Section 2 reviews the related work of other researchers. Section 3 demonstrates the details of our proposed method. Section 4 provides the experiment results and analyses. Section 5 concludes with the summary and the perspectives. Point Cloud Filtering Point cloud filtering is a process that takes place before point cloud registration. Since the dense 3D point cloud acquired by terrestrial laser scanners or RGB-D instruments is enormous and contains many outliers, it is necessary to remove outliers and downsample the raw data. Voxel downsampling [5] is the most common method. By dividing the point cloud by grid and replacing other points with the centroid, the voxel method achieves the best efficiency. However, it is not good at removing outliers. Statistical outlier removal [6] in the Point Cloud Library calculates the mean and standard deviation of the distance from each point to its neighborhood and then removes the points for which the distance is outside the set range. Yang et al. 
[7] added dynamic standard deviation thresholds to the statistical algorithm to solve the irregular density distribution. Pirotti et al. [9] proposed an improved statistical algorithm and local outlier factor algorithm based on K-nearest neighbors (KNN). The local outlier factor relies on the local density with respect to its neighbors. Coarse Registration When the point cloud sets start in an arbitrary initial pose, the local registration algorithms easily fall into a local minimum. Then, registration returns to solving a global problem. The coarse alignment can compute an initial estimate of the rigid motion between two surfaces. The manual method is firstly applied; Zhang et al. [12] proposed manually adding labels on the model before registration and then aligning them through the label feature information. However, this is not suitable for the complex point cloud. Principle component analysis (PCA) [13] is more reliable when the shape of the target point cloud and the source point cloud is the same. Huang et al. [14] integrated the projection strategy with the Fourier signal matching and promoted the performance of noisy and low overlap point clouds. Some studies introduced methods based on the RANSAC iteration. Aiger et al. [15] exploited the invariant property of a four-point congruent set. Theiler et al. [16] addressed a keypoint-based four-point congruent set (K-4PCS) to overcome the low efficiency of 4PCS. In addition, Mellado N et al. [17] greatly enhanced the 4PCS algorithm by smart indexing. Voxel-based 4PCS [18] voxelizes the point cloud and generates plane patches before extracting four-plane congruent sets. V4PCS improves the robustness to unequal point density or point clouds from different sources. Other studies based on the local geometric features of the point cloud are more extensive. Frome et al. [19] proposed 3D shape context (3DSC), which is an extension of 2D shape context. It adopts a feature description method based on shape contour, which uses histograms to describe shape features in a log-polar coordinate system that can reflect the distribution of sampling points on the contour well. Stein et al. [20] introduced a 3D splash descriptor by a local reference frame (LRF) to achieve posing stability. Tombari et al. [21] proposed a spherical coordinate system to divide the neighboring points around the query point and obtain the signature of histograms of orientations shot (SHOT) by counting the number in each subspace. This method balances descriptiveness and time efficiency. Rusu et al. [22] proposed a fast point feature histogram (FPFH) of two-point descriptors for all neighboring points of the reference point. It simplifies the point feature histogram (PFH) descriptor and decreases its computational complexity from O(n 2 ) to O(n). Flint et al. [23] presented 3D-SIFT descriptor by extending the 2D scale-invariant feature transform (SIFT) descriptor to 3D space. Guo et al. [24] proposed a local feature descriptor of rotation projection statistics (RoPS) by performing a series of operations such as rotation, projection, distribution matrix calculation, statistics analysis, and merging on the neighbor points. Yang J et al. [25] presented a local feature statistic histogram (LFSH) which formed a comprehensive description of the local geometry by statistically encoding the local depth, point density, and angles between normals. Chen et al. 
[26] introduced a descriptor based on plane/line structural features and achieved good performance of artificial point clouds with regular planes. However, it is not good at handling irregular features such as plants. Based on voxelization, Wang et al. [27] proposed a 3D SigVox descriptor, which is the first shape descriptor of complete objects to match repetitive objects in large point clouds. For more details about 3D point cloud coarse registration, readers are referred to the survey in [24]. Fine Registration In contrast with the coarse registration, the fine registration method primarily refers to directly obtaining the correspondence between two original points rather than feature descriptors. It produces a more precise result from the initial transformation. The most well-known fine point cloud registration approach is the iterative closest point (ICP) method from Besl and McKay [4]. This method starts from initial alignment and then alternates between establishing correspondences through the closest point and recalculating the alignment according to the current correspondences. The ICP algorithm achieves positive performance in point cloud registration. However, it takes too much time due to the closest point search, and it easily falls into a local minimum, especially when there are noises. Some studies have focused on solving the local minimum problem. For instance, Chetverikov et al. [28] introduced trimmed-ICP, which is based on the consistent use of the least trimmed squares (LTS) in all phases of the operation to improve the robustness to noise. Yang et al. [29] proposed GO-ICP, which is the first variant of ICP to solve the local minimum. However, GO-ICP is still sensitive to occlusion and partial overlap. In addition, Biber et al. [30] proposed the normal distribution transform (NDT), which is applied to the statistical model of three-dimension points rather than a local feature or closest points. Therefore it does not need to include feature calculation and matching of corresponding points in the ICP process, and NDT runs faster. The contributions of some other studies are improving computational efficiency. Chen et al. [31] took advantage of the tendency of most range data to be locally planar and introduced the point-to-plane variant of ICP. Compared with the ICP algorithm, point-to-plane ICP greatly advances operational efficiency. Khoshelham et al. [32] used closed-form point-to-plane correspondence and improved the computing speed. Segal [33] combined the ICP and "point-to-plane ICP" algorithms into a single probabilistic framework called GICP. The factors that affect the computation efficiency are closest point search and the solution of nonlinear problems. Bouaziz et al. [34] used sparsity inducing norms to make the algorithm efficient. Yang J et al. and Koide [35] extended the GICP with voxelization to avoid the costly nearest neighbor search. Due to the popularity of low-cost point cloud acquisition devices such as Kinect and Realsense, RGB-D point cloud processing has become more crucial. Korn M et al. [36] integrated L*a*b color space information into GICP and presented a method to support point cloud registration with color information. The convolutional neural network (CNN) has a strong ability to learn feature descriptors. Aoki et al. [37] expanded the PointNet and LucasKanade (LK) algorithms into a single trainable recurrent deep neural network and achieved outstanding registration performance. 
However, it is not good at handling partial-to-partial registration. The deep closest point (DCP) proposed by [38] performs well in solving the local minimum problem of ICP, but it can only handle a single-object point cloud with fewer than five thousand points. Methodology An accurate spatial registration method was designed to align 3D point clouds captured from different perspectives with partial overlap into the same coordinate system while keeping the registration error small enough. Figure 1 shows the flowchart of the proposed method. First, the improved voxel filter is used to remove outliers from the original point cloud. Then, MVGICP is applied to achieve multi-scale iteration for coarse-to-fine registration end-to-end. Multiple threads accelerate the entire calculation process. Problem Formulation Point cloud registration can be described as finding the best rigid transformation between two point cloud sets. Given a pair of data P and Q, for any point p_i ∈ P and q_i ∈ Q, the problem can be written as q_i = R p_i + T + ε, where R and T are the rotation matrix and translation vector, respectively, and ε represents the outliers and noises of the raw data. The 3D point cloud registration algorithm should be accurate and fast. It should also be robust to noise, arbitrary poses, and other perturbations. In this section, a registration algorithm that satisfies these requirements is introduced. It consists of two main parts: point cloud filtering and multi-scale registration. Point Cloud Filtering Thanks to terrestrial laser scanners (TLS) and low-cost 3D instruments such as Kinect, point clouds can be obtained easily. However, the raw data contain a huge number of redundant points and noises because of external interference as well as measurement errors of the collection equipment. Therefore, the original point cloud data need to be effectively filtered and denoised, enhancing the algorithm's stability to noise and reducing the amount of point cloud data. More precisely, this can enhance the accuracy and speed of subsequent point cloud processing. Researchers have proposed a variety of methods to downsample point clouds and eliminate outliers. The voxel filter has the highest computational efficiency, but it is weak at filtering outliers. The statistical outlier removal method based on the point distribution can achieve good filtering results, but its calculation time is longer. To overcome these shortcomings, this paper proposes an improved voxel filtering method, which strikes a balance between computational efficiency and outlier filtering. As shown in Figure 2, the input point cloud data are divided by a three-dimensional voxel grid. In detail, the distribution range [(x_min, x_max), (y_min, y_max), (z_min, z_max)] of the point cloud in the three dimensions is obtained first of all. Then, an appropriate cell size c (Bunny: 0.006; Hippo: 0.02; Chef: 6) to rasterize the point cloud is selected. The number of grids obtained in the X dimension can be described as N_x = ⌈(x_max − x_min) / c⌉. Likewise, the numbers of grids N_y and N_z can be obtained respectively, and (N_x × N_y × N_z) cubes can be calculated. If a point cloud P contains N uniformly distributed points, the number of points per cube is n = N / (N_x × N_y × N_z). In fact, the density of points in the point cloud is not uniform. As shown in Figure 2, the model has a high point density, while the noise points have a low density. The model's cubes contain a large number of points, while the noise-point cubes contain fewer points.
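To make the grid construction above concrete, here is a minimal numpy sketch; the cell size is whatever the caller supplies, and the count threshold follows the t = 2n rule discussed in the next paragraph (the factor is exposed as a parameter here).

```python
import numpy as np

def improved_voxel_filter(points, cell_size, count_factor=2.0):
    """Sketch of the voxel-plus-density filter: voxels holding fewer points than
    a density-derived threshold are treated as outliers and dropped; the points
    of every surviving voxel are replaced by their centroid (downsampling)."""
    mins = points.min(axis=0)
    # N_x = ceil((x_max - x_min) / c), and likewise for y and z
    dims = np.maximum(np.ceil((points.max(axis=0) - mins) / cell_size), 1).astype(int)
    # expected points per cube for a uniform cloud: n = N / (N_x * N_y * N_z)
    n_uniform = len(points) / np.prod(dims)
    threshold = count_factor * n_uniform          # the paper uses t = 2n

    voxel_idx = np.floor((points - mins) / cell_size).astype(int)
    keys, inverse, counts = np.unique(voxel_idx, axis=0,
                                      return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    centroids = np.zeros((len(keys), 3))
    np.add.at(centroids, inverse, points)         # sum points per voxel
    centroids /= counts[:, None]                  # centroid of each voxel
    return centroids[counts >= threshold]         # keep only dense voxels
```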
In addition, dividing along the three coordinate axes will produce many empty or repeated grids, meaning that the real number of occupied grids will be less than (N_x × N_y × N_z). According to these factors, outliers can be filtered out by setting an appropriate threshold and eliminating cubes with fewer points than the threshold. Moreover, the improved voxel filter is fast and efficient: it greatly improves outlier removal performance merely by adding a point density calculation to the traditional voxel filter. The selection of the threshold is the most important step. A threshold that is too low will result in incomplete removal of outliers, while one that is too high will cause holes in the point cloud. In this paper, the threshold was set as t = 2n. After removing the outliers, P is converted to P′, which contains N′ points. Then, the centroid of all points in each remaining voxel grid can be calculated as c_j = (1/a) Σ_{k=1}^{a} p_k, j = 1, ..., b, where b represents the number of remaining grids and a is the number of points in each grid. Finally, all the points in each grid are replaced by the obtained centroid point, which reduces the point cloud scale. MVGICP: Point Cloud Registration In this section, we propose a Multi-Scale Voxelized GICP (MVGICP) method to register point clouds, which is an improved version of GICP and VGICP. Compared with GICP and VGICP, MVGICP achieves smaller registration errors and faster speeds. The pseudo code for MVGICP is given in Algorithm 1. It first uses a large voxel size (one hundred times the average density of the point cloud) to segment the target point cloud and obtain the mean and covariance of each voxel, under the assumption that points within a voxel grid follow a Gaussian distribution. Then, MVGICP brings the obtained values into the GICP framework to calculate an initial transformation T. Afterward, a step-down voxel grid size (the minimum is six times the average density of the point cloud) is adopted to refine the result until the convergence threshold or the maximum number of iterations is reached.
Algorithm 1 (MVGICP):
  for c = c_max : c_min do
    compute V_Q by voxelizing the target cloud with voxel size c;
    solve Equation (7) and update T;
  end
  return T; P ← P * T; verify whether T aligns P and Q.
  Voxelization: for i = 1 : m do
    compute the mean value µ of the points in voxel grid i;
    compute the covariance C of the points in voxel grid i;
  end
  return V.
Basic Concept of GICP Generalized-ICP (GICP) is a variant of the iterative closest point (ICP) algorithm. It combines traditional ICP and the point-to-plane variant in a probabilistic framework, forming a plane-to-plane matching strategy. As shown in Figure 3, the point-to-point matching of traditional ICP easily falls into a local minimum. GICP is a distribution-to-distribution strategy: it uses the covariance matrix of neighboring points instead of each single point to calculate the best transformation T, which overcomes the shortcomings of the point-to-point method. On the surfaces from P and Q, each point is modeled as a sample from a Gaussian distribution: p̂_i ∼ N(p_i, C_i^P) and q̂_i ∼ N(q_i, C_i^Q). Due to its framework being similar to ICP, the transformation of GICP can be expressed as T = argmin_T Σ_i d_i^T (C_i^Q + T C_i^P T^T)^{-1} d_i, where d_i = q_i − T p_i is the alignment error and approaches zero, and C_i^P and C_i^Q are the covariances of point clouds P and Q, respectively, which are estimated from their adjacent points. GICP increases calculation speed. However, similar to ICP, it requires a good initial pose and closest point searching, which results in local minima and huge calculation costs. To further optimize GICP, many improvements have been proposed.
Three-Dimensional Normal Distribution Transform (3D-NDT) transforms a discrete 3D point set into a piecewise continuous probability density represented by a normal distribution set and maximizes the probability that one distribution falls into another. The use of the probabilistic-based method avoids the closest point searching and improves calculation efficiency. Nevertheless, the accuracy of NDT is relatively low. VGICP adopts the 3D-NDT strategy and extends GICP with voxelization. However, both VGICP and NDT are sensitive to the voxel size. MVGICP overcomes its shortcomings. It can be initially registered at a large voxel scale, and the small size can refine the registration result. Figure 4 shows the concept of MVGICP; its main idea is to voxelize the point cloud with different voxel sizes. In detail, MVGICP firstly subdivides the space occupied by the point cloud model into small cubes of regular size, and then calculates the mean u and covariance C Q i of the cube containing multiple points. So, the distance between p i to its adjacent points q i within radius r, which can be written as this formula: d i = ∑ jqj − T p i , and Formula (5) can be transferred to: Mvgicp'S Optimization of Local Minimum To improve the computing efficiency, the mean of all neighbor points in the voxel grid is used instead of a single point, and the above objective function becomes: where N i is the number of neighbor points in the voxel grid, which is closely related to the selected voxel size. According to the Gauss-Newton method, the objective function composed of (T, p i , C P i , v.u, v.C, v.N) can be iteratively updated to get the initial transformation. From Formula (7), voxelization can smooth the target point cloud's distribution. Hence, the voxel size is significant. A large size can smooth a larger surface so that the point cloud with larger differences can be aligned. Therefore, MVGICP solves the problem of GICP's high initial position requirements issues. Mvgicp'S Fine Registration For large voxel size, MVGICP can roughly align the point cloud, but its accuracy is low. After obtaining the initial transformation, fine registration is required. As shown in Figure 4b, the number of the points contained in the voxel decrease when reducing the voxel size. This will gradually weaken the target point cloud's smoothing effect, and save more geometric feature information; thus, higher registration accuracy can be obtained. Therefore, for the small voxel size, MVGICP can register the point cloud more accurately. In summary, MVGICP does not require a coarse-to-fine registration strategy. Large-scale MVGICP plays the role of initial registration, and gradually reducing the voxel size will further improve the results. Overall, MVGICP runs faster without the closest neighbor search process. Experiment and Analysis In order to verify the accuracy of geometric registration and computational efficiency of the algorithm in this paper, we designed two sets of experiments. The first set of experiments compares the improved voxel filter's outlier filtering performance and computational efficiency with other filtering algorithms. The second set of experiments compares MVGICP with other methods in terms of the registration performance and time consumption based on different datasets. The registration experiment is conducted based on the open-source PCL. The experiment computer is equipped with Intel i7-6700hq, 2.6 GHz CPU, 16.00 GB of memory, and a 64-bit Ubuntu 18.04 operating system. 
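As a concrete reference before the experiments, the following sketch illustrates the two ingredients of MVGICP described above: per-voxel Gaussian statistics (the "Voxelization" step of Algorithm 1) and a coarse-to-fine voxel-size schedule. The average-spacing estimate, the geometric schedule, and the step count are illustrative assumptions, and the Gauss-Newton update of T that consumes these statistics is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def average_spacing(points):
    """Rough point-cloud resolution: mean distance to the nearest neighbour."""
    d, _ = cKDTree(points).query(points, k=2)
    return d[:, 1].mean()

def voxel_distributions(points, voxel_size):
    """Per-voxel mean and covariance (the 'Voxelization' step of Algorithm 1)."""
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    stats = {}
    for v, key in enumerate(keys):
        pts = points[inverse == v]
        cov = np.cov(pts.T) if len(pts) > 3 else np.eye(3) * 1e-6
        stats[tuple(key)] = (pts.mean(axis=0), cov, len(pts))
    return stats

def voxel_schedule(points, coarse=100.0, fine=6.0, steps=4):
    """Coarse-to-fine voxel sizes, from ~100x down to ~6x the cloud resolution."""
    s = average_spacing(points)
    return np.geomspace(coarse * s, fine * s, steps)

# At every scale, the (mean, covariance) pairs of the target voxels would be fed
# into a GICP-style Gauss-Newton update of T (Equation (7)); that solver is
# omitted here for brevity.
```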
Outlier Filtering Result To validate the filtering algorithm proposed in this paper, the different ratios of random noise are added to the public Bunny, Chew, and Hippo point cloud models. Traditional voxel grid filter (Voxel) [5], StatisticalOutlierRemoval filter (Sor) [6], KNN-based local outlier factor (KLof) and statistical method (KSor) [9], dynamic standard deviation threshold (DSDT) [7] and our method for removing outliers and downsampling are applied, respectively. The KSor and DSDT improve the statistic methods by KNN and dynamic standard deviation threshold, respectively. KLof is mainly based on the local density of the neighbors, while our approach concerns the points within the voxel. The filtering results are shown in Figure 5. We introduce two evaluation metrics for point cloud denoising. The ground-truth and predicted point cloud are described as , where p i , q i ∈ R 3 . N 1 and N 2 indicates the number of points and they may not be equal, respectively. The metrics are defined as follows: (1) Root-mean-square-error (RMSE): RMSE is the square root of mean squared error (MSE). Compared with mean squared error (MSE), RMSE can reduce the magnitude of error between different algorithms and make it easier to characterize the comparison curve. It calculates the Euclidean distance between the ground-truth and predicted point cloud. (2) Signal-to-noise ratio (SNR): SNR is a common indicator to measure the level of image quality. Usually, a higher SNR means better graphics quality. SNR is measured in dB given as: Random points are generated and added to the original models to form point clouds with different noise ratios. Figure 5 shows the outlier removal results of different algorithms. The Sor, KSor and DSDT are unable to remove the outliers. That is because the statistical-based method relies on the standard deviation, which is sensitive to the outliers of the boundary. KLof and our method have a better performance due to the local density calculation. The results demonstrate that the improved voxel filter can remove the different outlier ratios efficiently. In order to quantitatively evaluate the outlier removal performance of the improved filter, the SNR and the processing time are presented. According to Equation (9), the original point cloud after voxel filtering is regarded as the ground truth, and SNR of the filtered point cloud is calculated. The compared methods perform outlier removal firstly and then use the same voxel size for downsampling. Figure 6 shows the SNR and processing time of different filtering methods. The traditional voxel filter performs worst because it only has the ability of downsampling. The Sor, KSor and DSDT are not good at filtering the point cloud with higher outlier ratios. KLof and our method perform well in most cases; this indicates that the density-based method is more suitable in the uniform distribution of outliers. In Figure 6b, due to the KNN process, the processing time of KSor and KLof is significantly higher than that of other methods. Our algorithm achieves superior efficiency by avoiding the closest point calculation. Figure 6 demonstrates that the improved voxel filter balances computational efficiency and outlier filtering. Figure 6. Evaluation of signal-to-noise ratio (SNR) and processing time on several filtering algorithms. Our approach obtains the highest SNR, and its computational efficiency is much higher than other methods. 
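The two metrics used above can be computed as in the sketch below. Since N_1 and N_2 may differ, each predicted point is paired with its nearest ground-truth neighbour; this pairing and the exact SNR convention are our assumptions, because Equations (8) and (9) themselves are not reproduced in the text above.

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse(ground_truth, predicted):
    """RMSE between clouds of possibly different size, using nearest-neighbour
    pairing of each predicted point with the ground truth."""
    dists, _ = cKDTree(ground_truth).query(predicted)
    return np.sqrt(np.mean(dists ** 2))

def snr_db(ground_truth, predicted):
    """SNR in dB: ground-truth signal power over the mean squared
    nearest-neighbour error (one common convention)."""
    dists, _ = cKDTree(ground_truth).query(predicted)
    signal_power = np.mean(np.sum(ground_truth ** 2, axis=1))
    return 10.0 * np.log10(signal_power / np.mean(dists ** 2))
```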
Another test is proposed to evaluate the effect of filtering on the registration performance quantitatively. The evaluation metric is RMSE. After outlier removal, classical ICP and GICP algorithms are selected to register the two point clouds. Due to the different denoising performance of the algorithm, outliers cannot qualify RMSE to indicate the registration result's quality. Therefore, in this section, the transformation matrix obtained by denoising the point cloud is applied to act on the ground truth, and then the RMSE is calculated. Tables 1 and 2 respectively show the RMSE of filtered point cloud after ICP and GICP registration. Corresponding to the filtering results, our method achieves the best performance among the six filtering algorithms. The experimental results show that outlier removal significantly influences the performance of the point cloud registration. Synthetic Data Registration The MVGICP registration algorithm is tested on simulated data to prove its robustness to perturbation. ICP [4], GICP [33], VGICP [35], and some other coarse-to-fine strategy methods like Super4PCS-ICP (SICP) [17], Super4PCS-GICP (SGICP) [17,33] are tested as comparison methods. ICP obtains the correspondence between two original points directly. GICP is the first ICP variant algorithm that uses the distribution-to-distribution strategy. VGICP and our method extend GICP with fixed-scale voxels and multi-scale voxels. Coarse-to-fine methods use four-congruent point sets to roughly align the point clouds and further refine the result by ICP or GICP. Simulation data include the Bunny and Dragon from the Stanford dataset and T-rex, Chef, Chicken, and parasaurolophus from the UAW dataset [39]. The stability of MVGICP on Gauss noise is tested first. The scanned point cloud may contain some noise points and lead to multiple errors, such as noise, rotation and translation. The noise points are attached to the point cloud rather than free, so it is not easy to filter them out and affect the registration results. Therefore, the stability of the algorithm to noise perturbation has great significance. As shown in Figure 7, Gauss noise is added to Chef, and the standard deviations are 0, 0.5, 1, and 2, respectively. Since the Gaussian noise is related to the point cloud density, the UAW dataset with a relatively similar point density is selected. To further evaluate the algorithms, a comparison of the above algorithms' processing time is also performed. As shown in Table 3, due to the influence of the initial registration, the coarse-to-fine method takes the most time. In addition, since voxelization can smooth the point cloud's surface, VGICP is more effective than ICP and GICP. With multi-scale voxel division and multi-level iterations, our method can quickly obtain a good initial transformation through the large-sized voxel. This appropriate initial transformation can save the convergence time for a small-sized voxel. Based on this, our algorithm has the highest computational efficiency. It can be concluded that MVGICP works well on the point clouds with different noise ratios. The algorithms' performance for registering varying degrees of rotations between a pair of point clouds is also tested. To perform a controlled evaluation, the degree of rotation between any two point clouds is given. The viewing angle differences of the input clouds are from 15 • to 90 • , and its step size is 15 • . Figure 9 shows the results of the RMSEs of several algorithms on different rotation perturbations. 
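For reference, the per-experiment error used in the filtering comparison above (estimate the transform on the filtered clouds, then score it on the clean data) can be written in a few lines; `rmse` is the nearest-neighbour RMSE sketched earlier and the names are again illustrative.

```python
import numpy as np

def registration_rmse(T, clean_source, clean_target):
    """Apply the 4x4 transform estimated on the denoised clouds to the clean
    (outlier-free) source and measure the residual error against the clean target."""
    transformed = clean_source @ T[:3, :3].T + T[:3, 3]
    return rmse(clean_target, transformed)
```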
The RMSE of GICP increases significantly as the pose difference becomes larger. ICP is more stable than fixed-scale VGICP. That is because the VGICP relies on a certain voxel size, which is susceptible to sub-voxel misalignment. Both our algorithm and the coarse-to-fine method are relatively stable to rotation disturbances. Nevertheless, compared with the coarse-to-fine strategy, our algorithm is more concise and effective because our algorithm does not require initial registration. This confirms that MVGICP can register the point clouds with various rotations, which is important in the 3D reconstruction. Multi-View Registration In this section, the multi-view registration results of our algorithm on the synthetic data and real data are presented. Multi-View Synthetic Data Registration (1) Multi-view registration without outliers: the incremental registration ability of our algorithm on the dataset UAW is evaluated. Multiple scanned point clouds of the Chicken and T-rex without outliers are used as input. Figure 11 shows the multi-view point clouds and the registration results. As shown in Figure 11a,c, each model has 12 scans, and each scan in the point clouds to be registered is remarked by a unique color. The benefits from multi-scale iteration, although these point clouds have different rotations and overlap ratios, include that they are all well aligned. In order to quantitatively analyze the multi-view registration capability of our algorithm, the average registration errors of each object are presented in the Table 4, and are defined as the average difference from the ground truth. ICP is selected for comparison in this part. The table shows that the rotation and translation errors of MVGICP are less than ICP. Since ICP relies on the location of every point, it is easy to fall into local optima. Furthermore, the experimental results demonstrate that MVGICP can accurately register point clouds from multiple views. Table 4. Multi-view registration results on the Chicken and T-rex models of the UWA dataset. Model Chicken T-Rex (2) Multi-view registration with outliers: To verify the whole algorithm proposed in this article, the Armadillo, with outliers, will be tested. Figure 12 shows the input outlier point clouds and their registration results. As shown in Figure 12a, the scans of the Armadillo from different angles are placed together with many outliers. Figure 12b-d shows the registration results in three different views respectively. The registration results indicate that the proposed algorithm filters out the outliers with high accuracy. To evaluate the influence of outlier filtering on the final result, we present the average registration errors of Filtered-MVGICP and MVGICP in Table 5. The rotation and translation errors of MVGICP are nearly 10 times higher than filtered MVGICP. This strongly supports the idea that the proposed method greatly reduces the registration error and shows the stability for outlier 3D point cloud registration. Figure 13 shows the registration results of MVGICP on the real outdoor and indoor scenes. Compared with the model data, the scene point cloud is larger and has more geometric features. There are obvious pose differences between the point clouds to be registered. Furthermore, the results indicate that accurate registration for real scene data is accomplished, and scans are well converged by MVGICP. Conclusions This paper presents an efficient, stable, and accurate hierarchical optimization algorithm for 3D point cloud registration. 
Our improved voxel filter not only removes outliers well but also has good computational efficiency. In addition, MVGICP is effective at finding the optimal transformation under different levels of noise and rotation perturbation, and it handles various types of point cloud effectively, such as model scans, scene scans, TLS and RGB-D data. A multi-threaded implementation can accelerate the calculation without affecting the quality of the final result. Our algorithm addresses the problems raised in the introduction well and may be broadly applicable in 3D reconstruction, computer vision, and robotics. The proposed method still has some limitations. The algorithm can be further improved for registering large translation disturbances and low-overlap scene data. We will consider adding local feature information to the framework to improve the handling of point clouds in complex scenes.
Strategies in the processing and analysis of continuous gravity record in active volcanic areas : the case of Mt . Vesuvius This research is intended to describe new strategies in the processing and analysis of continuous gravity records collected in active volcanic areas and to assess how permanent gravity stations can improve the geophysical monitoring of a volcano. The experience of 15 years in continuous gravity monitoring on Mt. Vesuvius is discussed. Several geodynamic phenomena can produce temporal gravity changes. An eruption, for instance, is associated with the ascent of magma producing changes in the density distribution at depth, and leading to ground deformation and gravity changes The amplitude of such gravity variations is often quite small, in the order of 10-10 nms, so their detection requires high quality data and a rigorous procedure to isolate from the records those weak gravity signals coming from different sources. Ideally we need gravity signals free of all effects which are not of volcanic origin. Therefore solid Earth tide, ocean and atmospheric loading, instrumental drift or any kind of disturbances other than due to the volcano dynamics have to be removed. The state of the art on the modelling of the solid Earth tide is reviewed. The atmospheric dynamics is one of the main sources precluding the detection of small gravity signals. The most advanced methods to reduce the atmospheric effects on gravity are presented. As the variations of the calibration factors can prevent the repeatability of high-precision measurements, new approaches to model the instrumental response of mechanical gravimeters are proposed too. Moreover, a strategy for an accurate modelling of the instrumental drift and to distinguish it from longterm gravity changes is suggested. Mailing address: Dr. Umberto Riccardi, Dipartimento di Scienze della Terra, Università degli Studi di Napoli «Federico II», Largo S. Marcellino 10, 80138 Napoli, Italy; email<EMAIL_ADDRESS>Vol51,1,2008_DelNegro 16-02-2009 21:27 Pagina 67 Introduction A wide set of dynamic phenomena (i.e.geodynamics, seismicity, volcanic activity) can produce temporal gravity changes, with a spec-trum varying from short (1-10 s) to longer (more than 1 year) periods.An impending eruption, for instance, is generally associated with the ascent of magma producing changes in the density distribution at depth, and leading to ground deformation and gravity changes observed at surface.The amplitude of such gravity variations is often quite small, on the order of 10 −9 -10 -8 g (10-10 2 nms −2 ; 1-10 µGal), so their detection requires high quality data and a rigorous procedure to split up from the records those weak gravity signals coming from different sources.What exactly would Time-Variable Gravity (TVG) tell us about mass redistribution below a volcano?The detected TVG is the sum could be useful to characterize the deformational behaviour in some geodynamic contexts.Several investigations (e.g., Melchior and Ducarme, 1991;Melchior, 1995;Robinson, 1989Robinson, , 1991) ) carried out to date show for instance that a correlation exists at «regional» scale between heat flow and the gravity tide.At very local scale Arnoso et al. 
(2001) suggest that the tidal response can be strongly influenced by the structure and mechanical properties of the Crust.Those anomalies are associated, respectively, with areas of thin crust, high heat flow values, and recent basaltic-type volcanic activity, and with stable structures that have a deeper Moho discontinuity and lower heat flow.Robinson (1989Robinson ( , 1991) ) relates the correlation found in his studies to features in the upper crust, suggesting a measurable upper crustal tidal response.Arnoso et al. (2001) obtained interesting results from the analysis of the gravity tide collected in two continuous stations in Lanzarote island (Canary islands-Spain).After a suitable reduction of the OTL effect by means of global ocean charts complemented with regional and local ones, they obtained anomalous M2 and O1 delta factors and phases consistent with a body tide effect.These results were interpreted as the response of a porous or cavity-filled, local, upper crust under the influence of tidal strain. Moreover, knowledge of the specific tidal parameters for an area is required to calculate the luni-solar effect, which has to be removed from the gravity record to obtain gravity residuals. As we are interested in modelling the transfer function between the observed gravity and the underground mass redistribution due to volcanic activity, ideally we need residual gravity signals free of all effects which are not of volcanic origin.In fact, natural (mainly body tides), man-made and instrumental sources affect the signal to noise ratio and hide the subtle volcanic signals.Therefore solid Earth tide, ocean and atmospheric loading, instrumental drift, hydrological effects or any kind of disturbances other than due to the volcano dynamics have to be modelled to be reduced in the gravity signal.The atmospheric dynamics is one of the main sources precluding the detection of of the gravitational signals originating from all geophysical sources at work at any given time.Sorting out different geophysical signals in the data is a challenge, but in principle can be facilitated by recognizing the different temporal and spatial characteristics of different geophysical phenomena (e.g., Chao, 1994). Two different approaches may be adopted to extract from the gravity records some insights related with the volcano dynamics, i.e., the analysis of the tidal gravimetric factor (delta: δ) and the analysis of gravity residuals. According to the recommendations of the Working Group on the Theoretical Tidal Model (SSG of the Earth Tide Commission Sec.V of the IAG), the delta factor (δ) is defined as the Earth's transfer function between the body tide signal (∆gn(r)) measured at the station by a gravimeter and the amplitude of the vertical component of the gradient of the external tidal potential (Vn) at the station. 
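The usual form of this definition, consistent with the notation introduced in the clause that follows, is the standard degree-n gravimetric factor (quoted here for reference rather than as the paper's own displayed equation):

```latex
\delta_n \;=\; \frac{\Delta g_n(r)}{\partial V_n / \partial r}
         \;=\; 1 \;+\; \frac{2}{n}\,h_n \;-\; \frac{n+1}{n}\,k_n
```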
where r is the radius of the Earth and , ,are volume Love numbers of degree n (complex value), which characterize the spherical elasticity of the Earth.Thus delta factor is the ratio between the observed gravity tide and the luni-solar gravitational attraction.As it defines the Earth transfer function of the external tidal potential, the delta factor is frequency-dependent and is related to the elastic property of the Earth.Because of the viscoelastic behaviour of the Earth, its reaction to the external perturbation due to the luni-solar gravitational attraction is characterized by a certain phase shift.So the study of the tidal parameters (delta factor and phase) for the main tidal waves and eventually their time evolution small amplitude gravity signals.Pressure changes can reach several tens of hecto-Pascal (say 50 hPa) in specific locations, so the amplitude of the atmospheric contribution to gravity is as large as 200 nms -2 , then it could be higher than volcanic signal.This is why a large part of this paper have been devoted to illustrating the most advanced developments in that field of research and the experience of the authors is presented. The goal of this paper is to describe new strategies in the processing and analyses of continuous gravity record collected in active volcanic areas.The experience of about 15 years at Mt. Vesuvius (Southern Italy) is reported.The time dependent behaviour of the tidal gravimetric factors is compared with the results from relative and absolute gravity surveys and seismic activity.The results are interpreted in the framework of the present-day dynamics of Mt.Vesuvius. Mt. Vesuvius is a quiescent volcano whose last eruption occurred in March 1944.Currently, its activity consists of a low level of seismicity, sometimes increasing in numbers of quakes and energy (hereafter called seismic crises), small ground deformation, gravity changes and moderate gas emission. The Mt. Vesuvius permanent gravity station The Mt. Vesuvius recording gravity station (fig. 1) is located at the Osservatorio Vesuviano (fig.1), where a recording gravity station has been operating since 1987 (Berrino et al., 1993b) and where a first experiment of continuous gravity measurements dates back to 1960s (Imbò et al., 1964(Imbò et al., , 1965a)).The permanent station is assembled on a concrete pillar located in an artificial cave, 20 m deep (ϕ: 40.828N, λ: 14.408E; h: 608 m) (Berrino et al., 1997), where the daily temperature variations are about 0.1°C and the annual ones are within 2°C.The gravity sensor is the LaCoste and Romberg model D, number 126 (LR-D126), equipped with a feedback system (van Ruymbeke, 1991), with a range equivalent to 3⋅10 4 nm/s 2 (implemented at the ROB, Royal Obser-vatory of Belgium in Brussels and upgraded in 1994).The data acquisition is provided by DAS or mDAS systems developed at the ROB (van Ruymbeke et al., 1995) at a sampling rate of 1 data/min (0.01667 Hz).Here we focus on the results of gravity records since 1994 (fig.2), when the instrument and siting of the station were improved.The station belongs to a relative gravity network, spanning the Vesuvian area, periodically surveyed since 1982.It is close to an absolute gravity station established on the volcano in 1986.The absolute value of g was measured in 1994, 1996, 1998and 2003(Berrino, 1995;;Berrino, 2000). 
In order to check the reliability of the gravity signals, the instrumentation is periodically calibrated and the background noise level at the station is analyzed.In fact, instrumental sensitivity can change, not always linearly, as a consequence of mechanical perturbations and the noise level at the gravity station.To characterize the background noise level, which could affect the instrumental response, the 1 min sampled residual gravity was analyzed to detect any possible seasonal dependence or the presence of spectral components which could hide or mask geophysical signals.Several time windows lasting about 1 week were selected in each season.The amplitude and spectral content of the noise (Berrino and Riccardi, 2004) show a flat trend in the analyzed spectral band (fig.3), according to the standard New Low Noise Model [NLNM] (Peterson, 1993).The high noise level during the autumn is a consequence of the meteorological condition (mainly wind) at Mt. Vesuvius during that season. Changes through time of the calibration factors for different kinds of mechanical gravimeters have been detected by several authors (e.g., Bonvalot et al., 1998;Budetta and Carbone, 1997;Riccardi et al., 2002).However, a complete understanding of the physical processes affecting the instrumental sensitivity is still far from being achieved.As changes in instrumental sensitivity can prevent the repeatability of measurements and affect the phase and ampli-tude of the recorded gravity signals, the accurate calibration of gravimeters plays a key role in high precision gravity measurements (Riccardi et al., 2002).The calibration of a gravimeter at an accuracy level of 10 −8 to 10 −9 g, is difficult to attain because of the many problems in pursuing a known gravity change («standard») at such a level of accuracy.The stability of the calibration factors of LR-D126 has been periodically investigated on site.This kind of calibration is obtained by inducing changes in the spring length through a known «dial» turning and fitting this, by least-squares, against the instrumental output.This is the most frequently Table I.Comparison between LR-D126 and superconducting SG-TT70-T015 meters: results (delta factor and phase) of the tidal analysis for the main tidal waves.In the last column the ratio (SG/D) of delta factors obtained by the records from the superconducting and D meters is listed.The tidal waves nomenclature is: O1-Diurnal lunar; P1-Diurnal lunar; K1-Diurnal luni-solar; S1-Diurnal solar; M2 Semi-diurnal lunar; S2 semi-diurnal solar. LR-D126 SG-TT70-T015 adopted calibration procedure for continuous gravity station equipped with relative mechanical instruments.Various schemes of dial turning have been tested at Mt. 
Vesuvius station (Riccardi et al., 2002).Hereafter, such calibrations are referred to as «on-site» calibrations.Moreover, two additional calibrations of the feedback were carried out in June 1994 and November 1997 in Sèvres, at the Bureau International des Poids et Mesures (BIPM) during the International Comparison of Absolute Gravimeters (Becker et al., 1995(Becker et al., , 2000)).A calibration of the instrumentation was also obtained in 1997 by means of a joint intercomparison with the superconducting gravimeter SG-TT70-T015 (table I) and the absolute FG5-206 gravimeter (Riccardi et al., 2002).As a consequence, the tidal analysis on the 1998-2000 data furnished a sharp decrease of the δ factors at the beginning of 1999 (Berrino and Riccardi, 2001).A theoretical value of the instrumental sensitivity was computed and compared with the calibration factors monitored «on site» to evaluate whether the calibration factor truly reflects changes in the instrumental response or is merely due to the adopted «on site» calibration procedures.The theoretical instrumental sensitivity was determined by a regression analysis between the meter's output signal and the synthetic gravity tide.Thus, a set of weekly theoretical values of calibration factor was obtained and compared with the results from the repeated calibrations (fig.5).A good agreement between the temporal evolution of the theoretical factors and those obtained through the «on site» calibration was detected (Riccardi et al., 2002).Moreover, the set of the «on site» and theoretical calibration factors has been plotted against the time occurrence of certain large worldwide earthquakes (ML>5) that shook the meter strong enough to send it out of range (fig.5).A time correlation between the larger changes of instrumental sensitivity and the occurrence of seismic events can be observed.More detailed discussion concerning the instrumental sensitivity changes on the occasion of large earthquakes is given in Fig. 5. On-site and theoretical calibration for LR-D126 against the occurrence of some large earthquakes (ML >5); «shot» on vertical axis is the arbitrary unit for the feedback frequency output.Riccardi et al. (2002) and Berrino and Riccardi (2004).They suggest a mechanical perturbation of the sensor, due to some dominant frequencies of the noise at the station on the occasion of large earthquakes.In fact the higher frequency of the seismic free oscillation excited by large earthquakes includes the fundamental mode of oscillation (T0: 15 to 20 s) of the La-Coste and Romberg spring gravimeters (Torge, 1989).These instrumental disturbances due to large earthquakes can last several weeks. Ocean loading and atmospheric reduction The state of the art of the modelling of the ocean loading effect on local gravity data is hereafter reviewed and the most advanced methods in pursuing the reductions of the atmospheric effects are presented.To focus on the methodological upgrades, we present our attempt to model the barometric «local» effect on gravity data in a stable non-volcanic area by means of a barometric array.However, by the way of the methodological approaches, these results can be fruitfully applied to fix the problem of air pressure effect on local TVG in volcanic areas. 
In a very general sense, Ocean Tide Loading (OTL) is the deformation of the Earth due to the weight of the ocean tides.The ocean tides induces water mass redistributions causing periodic loading of the ocean bottom.The Earth's deformation (vertical and horizontal displacement, TVG, tilt and strain) under this load is called ocean tide loading.The ocean tides as well as the body tides have more than one periodicity, so they can be described as the sum of several harmonic components having their own period.Problem areas are mostly islands and shallow seas with large tidal amplitude and fast varying phase lag.These include among the others in Europe: the Mediterranean Sea and the North Sea. To compute the ocean tide loading the ocean tides are integrated with a weighting function G Here L is the loading phenomenon (displacement, gravity, tilt or strain) at the station located at distance r.The ocean tide at r' is given in its complex form Z=Ae iϕ , where ϕ is the phase; ρ is the mean density of sea water and G is Green's function for the distance |r−rl|.The integral is taken over all global water masses A. 1972).The next step is to replace the convolution integral by a summation.Most ocean models are given on a 0.5°by 0.5°grid, which justifies direct summation over these ocean grid cells if the station is more than 10 km from the coast.Otherwise some re-gridding is necessary.Some local solutions (models) are obtained by means of a re-gridding the model gradually towards the station. Table II lists the amplitude (L) and phase (ϕ) of the main 11 harmonics of the OTL computed for Mt.Vesuvius station.These 11 harmonics are the largest in amplitude and represent most of the total tidal signal.These have been computed by means of a free OTL provider developed at the Onsala Space Observatory and maintained by M.S. Bos and H.-G. Scherneck (http://www.oso.chalmers.se/~loading/).Solutions coming from classical (SCW80) (Schwiderski, 1980) and most recent models (CSR 3.0; FES95.2,TPXO6.2,TPXO7.0)(Eanes, 1994;Le Provost et al., 1998;Egbert and Erofeeva, 2002) have been obtained.If the stations are close to the coast, like the Mt.Vesuvius one, an automatic interpolation is applied using a mask having a coastline resolution of 0.6 km.Schwiderski's model (SCW80) is one of the oldest and it has been considered the standard for many years.It is a hydrodynamic model, given on a 1°by 1°grid and uses an interpolation scheme to fit the tide gauges; SCW80 model does not account for the Mediterranean sea tide.FES95.2 is an upgrade of the FES94.1 model, a pure hydrodynamic tide model tuned to fit tide gauges globally, which includes the Mediterranean Sea tide.In FES95.2 the tides in the Arctic were improved and TOPEX/Poseidon satellite altimeter data has been used to adjust the long wavelength behaviour of FES94.1.It has been calculated on a finite element grid with very fine resolution near the coast.The version used at Mt. Vesuvius station is given on a 0.5°by 0.5°grid.The CSR3.0 models are nothing other than a long wavelength adjustment of FES94.1 model by using TOPEX/Poseidon data and are given on a 0.5°by 0.5°grid.TPXO.6.2 and 7.0 have been computed using inverse theory using tide gauge and TOPEX/Poseidon data.These models have a resolution of a 0.25°by 0.25°grid. The results lead to a maximum OTL effect at Mt. Vesuvius of about 10 nm/s 2 (1 mGal) with a slightly lower amplitude obtained by means of models accounting for Mediterranean Sea tides. 
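Written out, the loading convolution described at the beginning of this subsection takes the standard form below, with the same notation as the "where" clause above (a reference form, not a quotation of the paper's own equation):

```latex
L(\mathbf{r}) \;=\; \rho \iint_{A} G\!\left(\lvert \mathbf{r}-\mathbf{r}'\rvert\right)\, Z(\mathbf{r}')\,\mathrm{d}A,
\qquad Z = A\,e^{i\varphi}
```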
The amplitude and phase of such effect has to be accounted for to avoid a tidal modulation (diurnal and semi-diurnal) in the residual gravity signals. Besides solid Earth and ocean tides, atmospheric pressure variations are one of the major sources of surface gravity perturbations preventing a highly accurate detection of small amplitude gravity signals (see e.g., Hinderer andCrossley, 2000, 2004).The continual redistribution of air mass in the Earth's atmosphere causes periodic variations in local gravity at the solar tidal frequencies as well as random variations (Warburton and Goodkind, 1977;Spratt, 1982).Gravity (measured positive down) and local atmospheric pressure correlate with an admittance of about −3.0÷−3.5 nms −2 /hPa (e.g., Warburton and Goodkind, 1977;Müller and Zürn, 1983;Merriam, 1992).Knowing that pressure changes can reach 50 hPa in specific locations, the amplitude of atmospheric contribution to gravity may be as large as 200 nms −2 , which is typically only 10 times less than the solid Earth tides.Moreover, because this effect varies both in time and with frequency (Richter et al., 1995), the contribution is spread over a wide spectral domain and may inhibit the observation of small signals of non-tidal origin.The global atmosphere acts on surface gravity through two competing effects: a direct «Newtonian» attraction by air masses and an elastic contribution due to the Earth's surface loading.The amplitude and polarity of these two effects vary with distance, so the net contribution of the atmosphere coming from different distances from the gravity station is variable (e.g., Spratt 1982;Merriam, 1992;Mukai et al. 1995;Boy et al. 1998).The coherence scale of pressure fluctuations and some considerations on the hydrostatic approximation of the atmosphere, led some authors to suggest a division of the globe into «local» (within 50 km), «regional» (50-1000 km) and «global» zones (> 1000 km) (Atkinson, 1981;Merriam, 1992).In the local zone (<50 km) pressure can change rapidly in time, but is spatially coherent, so that pressure observations collected at the gravity site are sufficient to obtain an accurate reduction within a few tenths of nm/s 2 except when a front is passing through the local zone (Rabbel and Zschau, 1985).When a pressure front moves through or larger horizontal gradients affect the local zone, the band from 1 to 10 km from the gravity station becomes a critical area for which more detailed pressure data are needed. Atmospheric effects on gravity are routinely reduced using a barometric admittance, which is a simple transfer function adjusted by least square fitting between pressure and gravity, both measured locally.The use of a single scalar admittance has been well established (e.g., Warburton and Goodkind, 1977;Crossley et al. 1995).When atmospheric pressure p (hPa) is recorded jointly with gravity g (nms −2 ) at a single station, the gravity can be reduced (gr) by using the relation where pn is a reference pressure at the station and α is either a nominal value of −3.0 nms -2 /hPa or determined by a least squares fit of p to g. 
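In its usual single-admittance form this relation (equation (3.2) in the original numbering) reads as follows; the published equation may differ in detail:

```latex
g_r(t) \;=\; g(t) \;-\; \alpha\,\bigl(p(t) - p_n\bigr)
```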
The effectiveness and simplicity of this method has led to its widespread use in gravity studies for many purposes.This reduction typically accounts for some 90% of the total atmospheric effect.The drawback of this method is that the admittance shows some variation with time (e.g., Richter, 1987;Van Dam and Francis, 1998), usually on seasonal time scales, whereas the atmosphere is certainly variable on short time scales and local weather systems can move rapidly over a station in a few hours (Müller and Zürn, 1983;Rabbel and Zschau, 1985).So there is no guarantee that the correlation implied by eq.(3.2) is satisfied over all length and time scales.Furthermore Crossley et al. (2002) found that the admittance is sensitive to the time averaging windows applied on data, namely a higher admittance is found for shorter windowing.Moreover, the simple reduction using only the local pressure measurements cannot take into account either the global scale or re- gional (1000 km around the gravimeter) atmospheric effects. Several approaches using the local pressure more effectively have been attempted, particularly with a frequency dependent admittance (e.g., Warburton and Goodkind, 1977;Crossley et al. 1995;Neumeyer, 1995;Neumeyer et al., 1998;Kroner and Jentzsch, 1998;Van Dam and Francis, 1998).The method represents a transformation of the eq.(3.2) from the time domain to the frequency (ω) domain and allowing the admittance (α) to be frequency dependent Minimising ⎮Gr(w)⎮ 2 over the whole frequency range leads to (3.4) which is equivalent to the complex admittance defined by Warburton and Goodkind (1977).Crossley et al. (1995) demonstrated that the complex admittance, as expressed in eq.(3.4), is a powerful and versatile tool to model both local atmospheric effect and contribution due to the solar harmonics Sn and allows to select the frequency ranges of the air pressure reduction.et al. (2007) investigated the efficiency of a barometric array (fig.6) to improve the reduction of the «local» atmospheric effects on gravity data in normal weather conditions and also under extreme weather conditions.This research has been developed by using Superconducting Gravity (SG) data collected in Strasbourg and barometric records in five sites around the SG station at distances ranging between 10 and 60 km (fig.6).Six months of gravity and air pressure records (fig.7) have been analyzed both in the time and frequency domains.Some further analyses have been addressed on three time intervals (highlighted in fig.7) characterized by large and fast air pressure changes.The MLR approach allows us to jointly account for the atmospheric effect as it can be probed trough the barometric array.A similar methodology has also been applied by Dittfeld (1995) and Kroner and Jentzsch (1998). Riccardi The results obtained on the whole data set (6 months) demonstrated that negligible improvement in the «local» atmospheric reduction derives from the use of an array, as also shown by Dittfeld (1995).Moreover the short length of our study did not permit us to investigate seasonal variations in the pressure reduction to gravity. 
We further compared the efficiency of the atmospheric reduction by means of the barometric array with the one performed through global loading computation (Boy et al., 2002;Petrov and Boy, 2004).We use ECMWF (European Centre for Medium-range Weather Forecasts) surface pressure fields from the 4Dvar model as they have the highest temporal (3 h) and spatial (0.5°) resolution.A more detailed description of the global loading computation is given in Riccardi et al. (2007) and Petrov and Boy (2004).The results showed that gravity residuals reduced by means of the atmospheric loading using the 4Dvar surface pressure are slightly worse than those obtained by applying a single admittance coefficient.This is likely due to the low resolution of the loading model, which is unable to reconstruct high frequency local barometric changes. These considerations could be extended when the mean atmospheric conditions are not far from the hydrostatic equilibrium.So we decided to analyze data during shorter time intervals, mainly characterized by abnormal weather conditions like when an atmospheric front is passing.Here the results for 11 days during December 2000 are shown (figs. 7 and 8). In order to improve the time and spatial resolution of the computed global atmospheric loading, we tested a hybrid method which consists in the following steps: -a global loading reduction using the 3 h and 0.5°×0.5°gridedsurface pressure fields everywhere except in the local zone.The global contribution has been recomputed from the 4Dvar model and oversampled to 60 s with spline interpolating functions; -a local loading reduction obtained by dividing the local zone into a smaller grid using interpolated data and the 60 s pressure samples from the closest stations of the barometric array, except the central zone; -a central zone reduction using the J9 station pressure. All three contributions have been added leading to a hybrid time series of the atmospheric loading.The effect of the hybrid model in the load computation is clear, the increase in the model resolution has led to improve the efficiency of the global loading reduction giving results similar to those obtained through the barometric array (fig.8c,e).To evaluate the potentiality of each aforesaid method of reduction Riccardi et al. (2007) considered the standard deviation (σ) of the reduced gravity residuals as an estimator of the efficiency of the applied air pressure reduction.The standard deviations of gravity residuals reduced by means of the barometric array have been better than to those reduced by using an air pressure record collected in a single station.The use of the barometer array lowers the standard deviation of the gravity residuals by about 30%.The improvement is essentially due to a removal of an almost quadratic background trend by using the array (see fig. 
8a-c).The trend could be related to some coherent features of the barometric field at local scale sensed by the array.Moreover an improvement in the atmospheric reduction has been achieved with a frequency dependent admittance (fig.8d); as demonstrated by several authors (Crossley et al., 1995), the reduction is significantly better mainly at high frequency (>2 cpd), because large-scale pressure fluctuations are less correlated with gravity than are local pressure fluctuations.In only one of the 3 periods of rapid changes we investigated did the array improve on standard methods, and even in that case the improvement was noticeable only in the low frequencies, but actually worse at high frequencies.The spectra of the unreduced and reduced gravity residuals according to all the reduction methods are also drawn (fig.8f).They clearly show that a single admittance coefficient is enough to reduce the energy in all the spectral bands.Data from the barometer network improve the reduction at low frequen-cies (<2.5 cpd) while at higher frequencies the results are worse.In fact, comparing the gravity residuals reduced by a single admittance coefficient with those reduced through data from the barometric array (fig.8a,c), an increase in high-frequency (>3 cpd) noise is quite evident.These features could be due to the summation of correlated high-frequency noise in the pressure data series.Hence the use of pressure data acquired by an array to improve the gravity reduction requires special care, because they could introduce an artificial high-frequency noise.The application of the hybrid method improves the air pressure reduction in almost the entire spectral band except for the 2.0 cpd band (fig.8e).This would be the result of two model defaults i.e. an inefficient tidal fitting in the residual gravity computation and an inadequate modelling of the air loading due to the thermal S2 component (see Ponte and Ray, 2002). Finally we note that during normal atmospheric conditions, when the atmosphere is in apparent hydrostatic equilibrium, the use of our local array of barometers gave no improvement over the use of the pressure at the station itself.Accounting for the geometry of the available barometric array and the typical amplitude (10 −2 hPa) of the pressure signal in the «local» zone, we could expect some improvement with a more dense array of higher quality sensors using the methods described in this paper. A more general consideration arises from this experience: the highest level of development in air pressure reduction of local gravity data make sense only for gravity signals collected by superconducting gravimeters.Otherwise a single barometer can be enough to account for the main part of the pressure effect originating in the local zone.However it is noteworthy that a significant progress in modelling atmospheric effects, as demonstrated by several authors, can be pursued by using a frequency dependent admittance, which allows us to model the weather contribution at different frequency ranges.The reduction is significantly improved mainly at high frequency (>2 cpd) and consequently the reduced gravity residuals are much smoother than the others obtained by applying different kind of reductions. Analysis of gravity record at Mt. Vesuvius and results This section reports the most remarkable results coming from the experience of about 15 years of gravity recording at Mt. Vesuvius (Southern Italy).The time dependent behaviour Fig. 9. 
Calibration functions (dotted and continuous lines) interpolating the factors obtained with the on-site calibrations (points with error bars) and polynomial fitting (dashed line).The thickest line is the calibration function adopted to convert gravity records in nm/s 2 . of the tidal gravimetric factors is compared with the results from relative and absolute gravity surveys and seismic activity.The results are interpreted in the framework of the present-day dynamics of Mt.Vesuvius. To reduce the instrumental effect on tidal parameters (δ factors and phases) computed at Mt. Vesvius gravity station, a calibration function has been computed to convert the recorded signal into nm/s 2 (fig.9); this function derives from the available data-set of the calibration factors periodically checked at the gravity station.In detail, two calibration functions have been computed respectively by including or excluding the highest outlier of the calibration factors data-set obtained in 1999. The harmonic tidal analyses were repeated on the gravity record calibrated by means of the two functions to rule out any dependence of δ factors and phases from instrumental effects, namely the temporal changes of the calibration factors.Thus, taking into account these results, the first calibration function (dotted line in fig.9) was rejected and we adopted the second function (bold continuous line in fig.9) to calibrate the gravity record (fig.2) spanning 1999-2001 interval. All of the gravity records were analysed to obtain tidal parameters and gravity residuals (fig.2).The latter have been computed by subtracting the luni-solar effect (body tide), according to Tamura's gravity potential catalogue (Tamura, 1987) from the gravity record, as well as a first order correction for the atmospheric effect and instrumental drift.The mean coefficient, −3.5 nms −2 /hPa, has been adopted to reduce the atmospheric effect in gravity record (Berrino et al., 1997(Berrino et al., , 2000)).The Wahr-Dehant-Zschau (WDZ) Earth model (Wahr, 1981;Dehant, 1987;Zschau and Wang, 1987) has been adopted to compute tidal parameters, while for the computation of gravity residuals a synthetic tide was calculated using tidal parameters computed from the local gravity records since 1994.As regards the reduction of the drift, accurate modelling is necessary to remove the instrumental drift and to distinguish it from longterm gravity changes due to volcanic sources.This is mainly required in quiescent volcanic areas, where «slow» and small temporal gravity changes are expected.Otherwise in the case of large short-lasting (few hours or days) gravity variations, as observed in open-conduit volcanoes (Carbone et al., 2006), the instrumental drift can be easily modelled. Here, drift has been constrained by taking into account the temporal gravity changes obtained by both relative and absolute measurements periodically performed at Mt. Vesuvius.The latter show a negligible contribution on the trend observable in fig.2b; thus, the long term component of the gravity record can be considered instrumental drift.The drift corrected gravity residuals are shown in fig.2b. 
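Schematically, the first-order residual computation just described amounts to the following sketch; the variable names and the drift model are placeholders rather than the actual processing chain used at the station.

```python
import numpy as np

ADMITTANCE = -3.5   # nm/s^2 per hPa, the mean coefficient adopted in the text

def first_order_residuals(g_obs, g_tide_synth, pressure, p_ref, drift):
    """Subtract the synthetic body tide (from the locally determined tidal
    parameters), a single-admittance atmospheric correction and the modelled
    instrumental drift from the observed gravity record."""
    atmospheric = ADMITTANCE * (pressure - p_ref)
    return g_obs - g_tide_synth - atmospheric - drift
```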
The data set has been analysed by means of an algorithm for tidal analysis: «ETERNA 3.3» (Wenzel, 1996).The results for the main tidal waves are summarized in table III.The analyses have been performed on the gravity record rearranged in some temporal subsets to check the time stability of the solutions and investigate the temporal changes of δ factors with a better resolution.The results of these tidal analyses have also been compared with the previous ones from 1987-1991and the 1960 (table III, fig. 10c (table III, fig. 10c).1960s, 1987-91 and 1994-2000 (for tidal waves nomenclature refer to table I). Wave 1959-19611961-19651987-19911994-19981999-2000Imbò et al. (1965a) Imbò et al. (1965a) Although a calibration function has been adopted, aimed at eliminating or at least reducing the instrumental effects, the results of the tidal analyses (table III) show an increase in the δ factor in the period 1999-2000.Anyway the assumption of a calibration function and all the efforts aimed at achieve a rigorous calibration of the LCR D126 do not rule out bona fide some instrumental effects on the observed delta temporal changes at Mt. Vesuvius gravity station. It is noteworthy that these variations are well correlated with some changes in the activ-ity of Mt.Vesuvius.In fact a seismic crisis began in October 1999 (Iannaccone et al., 2001) and a significant inversion of the trend of the gravity changes occurred in 1994 as deduced by both relative and absolute measurements. In order to better understand the relationship, if any, between the results of tidal analyses and volcano dynamics, δ factor and gravity changes have been reconstructed by the available data for the last forty years and interpreted in the context of the activity of Mt.Vesuvius (fig.10a-c).shows the observed TVG at the Osservatorio Vesuviano station.The reliability of this gravity change may be strongly constrained by taking into account data from others stations of the Mt.Vesuvius relative gravity network.As an example, fig.10d shows the gravity changes at Torre del Greco, about 5 km SW of the Mt.Vesuvius crater, and the vertical ground movement continuously obtained by tide gauge data.Tide gauge data were collected very close to the Torre del Greco gravity station.An inversion of the trend, detected in 1993-1994, is also evident in the ground movement.A high degree of similarity in the changes observed at both stations is clear. The results of these tidal analyses have also been compared with the previous ones from 1987-1991 and the 1960s (table III, fig.10c).An increasing trend from 1961 to the present in the amplitude of the tidal waves is clearly detectable.Taking into account the logistic and instrumental differences between the 1959-1965 (Askania meter Gs9, Gs11) and 1987-2001 recording stations, a rough comparison among the different data can be made.From 1961From -1965From to 1987From -1991, changes in the tidal parameters cannot be considered significant, while an increase can be noted from 1991 to 1994 and, as previously discussed, in 1999-2000.The latter shows tidal parameters similar to the values determined in the 1959-61 time interval. 
Focusing on the most recent data (fig.10), it is interesting to note that the increase of the d factor from 1991 to 1994 (fig.10c) occurred during or soon after the 1989-91 seismic crisis.A gravity decrease of about 60 µGal (fig.10d) (Berrino et al., 1993a) was also detected between 1989 and 1991 at the Osservatorio Vesuviano gravity station by both relative and absolute gravity measurements. The tidal response of the investigated area could indicate a variation in the deformational behaviour probably due to the change in the mean mechanical properties of Mt.Vesuvius, as already suggested by Berrino et al. (1997). Up to now any additional information on volcanic sources may be inferred by the gravity residuals.Although their time distribution clearly shows an increase in amplitude and scattering during 1998-2001, coinciding with the increasing seismicity, there are not enough clear gravity signals to detect or hypothesize the presence of volcanic input. Conclusions The above described results show how the continuous gravity record on active volcanoes could be a useful investigative tool to detect volcanic inputs, but much care must be taken to remove from the recorded signals the effects due to the instrumental response and non-volcanic sources.As changes in instrument sensitivity can reduce the repeatability of measurements and affect the phase and amplitude of recorded gravity signals, the accurate calibration of gravimeters in high-precision gravimetry is topical.The stability of the calibration factors has to be deeply investigated through different calibration methods (e.g., inter-comparison with AG and SG). Concerning the modelling of the non-volcanic contribution to TVG, currently the tide generating potential is at a suitable level of ac-curacy (1 nm/s 2 ), so highly accurate catalogues of tide potential are available.Even the OTL modelling is highly accurate.The TOPEX/Poseidon satellite altimeter data deeply improved the studies on the OTL effects.Nevertheless some local solutions are needed for instance for volcanic islands. Moreover the state of the art demonstrates that the gravity record could be able to characterize the deformational behaviour of the volcano through the time evolution of the δ factor.However, as mentioned by several authors (e.g., Arnoso et al., 2001;Robinson, 1991;Melchior, 1995), this points out the existing necessity of theoretical studies and observations of the highest quality to answer the different questions regarding the significance of the tidal gravity anomaly and how it relates to mechanical properties of the upper crust.On the other hand, the capability of gravity residuals at least in the volcanic area characterized by a low level dynamics, requires a significant improvement in modelling mainly instrumental drift.However, the joint application of relative, absolute and continuous gravimetry is strongly recommended, to better remove the longterm instrumental drift.Thus the recognition of real gravity changes from apparent ones, due to instrumental behaviour, becomes more reliable.Currently, no additional information on volcanic sources may be inferred from the gravity residuals at Mt. Vesuvius permanent station.An improvement in the study of the mass redistribution due to volcanic processes by means of gravity residuals could derive from the use of at least a reference station outside the volcano, which would make it possible to model and exclude long-term and non-volcanic «regional» effects (Berrino et al., 1997). 
In quiescent volcanic areas, undertaken by «slow» and small temporal gravity changes, it is hard to recover signals (residual gravity) related to volcanic activity by means of mechanical gravimeters.This is mainly due to the strong and non-linear instrumental drift affecting the signals acquired by such sensors.The availability of SGs in active volcanic areas is hoped for because of their very small instrumental drift, high and stable sensitivity.SGs would allow to detect very slow and small TVGs often related with re-filling process of magma chamber. The reduction of the air pressure effects on local gravity through the most advanced methods, such as global loading computation or array application, is redundant for TVG collected by means of mechanical gravimeters. Fig. 1 . Fig. 1.Location of the recording gravity station and gravity network on Mt.Vesuvius. Fig. 3 . Fig. 3. Power spectra of the background noise level computed in each seasons at the Mt.Vesuvius gravity station, with the indication of the standard New Low Noise Model (NLNM) as reference. Fig Fig. 2a,b.Hourly values of gravity records (a), drift corrected gravity residuals (b).Anomalous record with abnormal drift and very large residuals are highlighted in circles. fig.2a) were characterized by an abnormal drift and then very large gravity residuals.As a consequence, the tidal analysis on the 1998-2000 data furnished a sharp decrease of the δ factors at the beginning of 1999(Berrino and Riccardi, 2001).A theoretical value of the instrumental sensitivity was computed and compared with the calibration factors monitored «on site» to evaluate whether the calibration factor truly reflects changes in the instrumental response or is merely due to the adopted «on site» calibration procedures.The theoretical instrumental sensitivity was determined by a regression analysis between the meter's output signal and the synthetic gravity tide.Thus, a set of weekly theoretical values of calibration factor was obtained and compared with the results from the repeated calibrations (fig.5).A good agreement between the temporal evolution of the theoretical factors and those obtained through the «on site» calibration was detected(Riccardi et al., 2002).Moreover, the set of the «on site» and theoretical calibration factors has been plotted against the time occurrence of certain large worldwide earthquakes (ML>5) that shook the meter strong enough to send it out of range (fig.5).A time correlation between the larger changes of instrumental sensitivity and the occurrence of seismic events can be observed.More detailed discussion concerning the instrumental sensitivity changes on the occasion of large earthquakes is given in Fig. 6 . Fig. 6.Location map of the superconducting gravimeter (filled circle) and the stations of the barometric array. Fig. 7a - Fig. 7a-c.The Strasbourg hourly data sets: a) gravity record; b) gravity residuals; c) air pressure.The time spans highlighted in gray are characterized by large and fast pressure changes and are the target of further analyses. Fig Fig. 10a-d.Time behaviour of Vesuvius dynamics from 1959 to 2001: a-b) seismic activity; c) tidal gravity factor for M2 tidal wave; d) gravity changes (µGal) at the Osservatorio Vesuviano station plus gravity and elevation changes at Torre del Greco. Table II . Amplitude (L; in nm s -2 ) and phase (ϕ in degrees) of the main harmonics of the OTL computed for Mt.Vesuvius station by means of different models. Table III . 
Comparison among tidal gravimetric factors determined during the 1960s, 1987-91 and 1994-2000 (for tidal wave nomenclature refer to table I).
LEARNING STATE SPACE TRAJECTORIES IN RECURRENT NEURAL NETWORKS: A PRELIMINARY REPORT

Abstract

We describe a procedure for finding ∂E/∂w_ij, where E is an arbitrary functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network. An embellishment of this procedure involving only computations that go forward in time is also described. Computing these quantities allows one to perform gradient descent in the weights to minimize E, so our procedure forms the kernel of a new connectionist learning algorithm.

Keywords: connectionism, learning algorithm, trajectory following, minimizing functionals.

The network dynamics are

    T_i dy_i/dt = −y_i + σ(x_i) + I_i,

where x_i = Σ_j w_ji y_j is the total input to unit i, y_i is the state of unit i, T_i is the time constant of unit i, σ is an arbitrary differentiable function, w_ij are the weights, and the boundary conditions y(t_0) and driving functions I are the input to the system. See figure 2 for a graphical representation of this equation. For instance, σ(ξ) = (1 + e^{−ξ})^{−1}, in which case σ′(ξ) = σ(ξ)(1 − σ(ξ)).

Consider E(y), an arbitrary functional of the trajectory taken by y between t_0 and t_1. Below, we develop a technique for computing ∂E(y)/∂w_ij and ∂E(y)/∂T_i, thus allowing us to do gradient descent in the weights and time constants so as to minimize E. The computation of ∂E/∂w_ij seems to require a phase in which the network is run backwards in time, but a trick for avoiding this is also developed.

The Equations

Let us define

    e_i(t) = ∂E/∂y_i(t).     (2)

In the usual case where E is of the form E(y) = ∫ f(y(t), t) dt this means that e_i(t) = ∂f(y(t), t)/∂y_i(t). Intuitively, e_i(t) measures how much a small change to y_i at time t affects E if everything else is left unchanged. We also define z_i(t) = ∂E(y^(i,δ,t))/∂δ at δ = 0, where y^(i,δ,t) is the same as y except that dy_i/dt has a Dirac delta function of magnitude δ added to it at time t. Intuitively, z_i(t) measures how much a small change to y_i at time t affects E when the change to y_i is propagated forward through time and influences the remainder of the trajectory.
The appropriate difference equation for z is z_i(t) = (1 - Δt/T_i) z_i(t + Δt) + Δt Σ_j (1/T_j) w_ij σ'(x_j(t)) z_j(t + Δt) + Δt e_i(t), (5) where the (1 - Δt/T_i) z_i(t + Δt) term is due to the linear influence y_i(t) has upon y_i(t + Δt), the Σ_j term is due to the effect that changing y_i(t) has upon the other y_j(t + Δt) through their nonlinear coupling, and the Δt e_i(t) term is due to the effect that changing y_i between times t and t + Δt has directly upon E. By rewriting (5), assuming it to be of the form z_i(t) = z_i(t + Δt) - Δt dz_i/dt(t + Δt), and taking the limit as Δt → 0, we obtain a differential equation, dz_i/dt = z_i/T_i - e_i - Σ_j (1/T_j) w_ij σ'(x_j) z_j. For the forward-in-time version, consider y^{(w_ij, ε)}, which is the same as y except that w_ij is increased by ε from t_0 through t_1. Again examining figure 2, we see that the appropriate difference equation for v is
1,037.4
1988-07-24T00:00:00.000
[ "Computer Science", "Mathematics" ]
Neutron production on the IPHI accelerator for the validation of the design of the compact neutron source SONATE We aim at building an accelerator-based compact neutron source which would provide a thermal neutron flux on the order of 4 x 10^12 n/s/cm²/sr. Such brilliance would put compact neutron sources on par with existing medium flux neutron research reactors. We performed the first neutron production tests on the IPHI proton accelerator at Saclay. The neutron fluxes were measured using gold foil activation and 3He detectors. The measured fluxes were compared with MCNP and GEANT4 Monte Carlo simulations in which the whole experimental setup was modelled. There is a good agreement between the experimental measurements and the Monte-Carlo simulations. The available modelling tools will allow us to optimize the whole Target Moderator Reflector assembly together with the neutron scattering spectrometer geometries. Introduction There is currently an interest in developing compact neutron sources (CNS) based on low energy proton accelerators (10-100 MeV) [1]. Such sources could serve as neutron sources for Boron Neutron Capture Therapy [2][3] or as neutron sources for neutron scattering to replace small ageing nuclear reactors [4]. There are already several projects of CNS on-going around the world. The most advanced at present is LENS, the Low Energy Neutron Source at Indiana University [5]. The CNS operating or under construction have gathered in UCANS, the Union for Compact Accelerator-driven Neutron Sources [6]. At Saclay we are considering a similar possibility to replace the Orphée research reactor. Our first aim has been to experimentally validate the neutron production and moderation obtained by Monte Carlo simulations using either MCNP [7] or GEANT4 [8]. Once reliable simulation tools are available we shall be able to estimate the performances of a CNS for neutron scattering experiments (from the source to the spectrometer) and compare its performances to existing facilities (reactor or spallation based). As a starting point we have used the IPHI proton accelerator which shall eventually be able to produce high current proton beams (up to 100 mA CW) with proton energies of 3 MeV. Our final goal is to eventually boost the proton energy to 20 MeV. Proton source The experiments have been performed on the IPHI proton accelerator which is based at CEA Saclay, France [9]. The accelerator consists of a proton source SILHI of energy 95 keV, a Low Energy Beam Transport Line and a Radio Frequency Quadrupole to accelerate the protons to an energy of 3 MeV. The accelerator is designed to operate in continuous mode with proton currents up to 100 mA, which corresponds to a total power of 300 kW. For the current experiments dedicated to validation of the neutron production and moderation Monte Carlo simulations we have operated the accelerator at a very low power of about 10 W, both to avoid any target damage and for radioprotection issues. The accelerator was operated in pulsed mode with proton pulses of length 100 µs, a repetition rate of 1 Hz and a peak current of 30 mA. Rather narrow thermal neutron pulses are thus obtained so that precise time-of-flight measurements can be performed. In order to produce neutrons we opted for a beryllium target of thickness 0.5 mm (99.0%) which stopped all incident protons. The target was attached on an aluminum support with titanium screws in order to minimize neutron activation. This support was air cooled. The proton beam size was limited to 16 mm in diameter.
The proton current incident on the target was continuously measured so as to have a precise value of the incident particle flux and be able to precisely estimate the neutron production. An electron-repelling electrode set at a potential of 200 V was placed in front of the beryllium target so as to prevent any bias in the target current measurement. The target was installed in a polyethylene (PE) moderator box (300 x 300 x 400 mm3) so as to cool the neutrons down to thermal energies (around 26 meV). A 20 mm diameter hole was drilled through the moderator from the position where the thermal neutron density was expected to be the highest to the outside of the PE box (see Figure 2a). In order to change the moderator geometry, it was possible to insert PE plugs inside the exit hole so as to fill more or less of the neutron extraction channel. The whole experimental setup (accelerator - target - moderator - detectors) was installed in a 2 m thick concrete casemate. Neutron detection Several diagnostic tools were installed. In order to monitor the fast neutron production (via the protons incident on the Be target and/or accelerator elements), a Bonner sphere was installed in the casemate. We set it up as close as possible to the thermal neutron detectors (at 8.4 m from the target). A set of five vertical 3He tubes (diameter 50 mm, length 130 mm) was installed in a PE box (20 mm thick PE plus an extra B4C elastobore lining) but without any front collimation. This large array of detectors did not provide quantitative results because the lack of front collimation led to the measurement of "spurious neutrons" moderated in the casemate walls. Besides, since we were operating in pulsed mode with rather short pulses, the neutron flux was very close to or above the neutron detector saturation (20 kHz). Thus, in order to provide reliable quantitative results, a single horizontal 3He detector was used (diameter 50 mm, length 130 mm) and set at 8.4 m from the target in a heavier shielding box providing some front collimation (angular opening of about 10°). The 3He pressure in the tube was 5 bars so that, over the tube length of 130 mm, the detection efficiency is 100% down to 0.9 Å. It slightly drops to a value of 95% at 0.6 Å and 65% at 0.2 Å. Hence it can be considered that the thermal neutron spectrum does not need to be corrected for detection efficiency. One may be worried that the long length of the detector may bias the ToF measurements. If we aim for a time resolution of 1%, we need to have the neutrons detected in the first 1% x 8.4 m = 84 mm of the tube. For neutron wavelengths above 0.7 Å, 90% of the particles are detected in these first 84 mm of the 3He tube. Hence we can assume that the ToF is measured with an accuracy better than 1% for thermal neutrons. This detector covered a solid angle of 0.3 x 0.3° so that the instantaneous neutron flux was significantly reduced. A neutron-sensitive image plate was also used, but it essentially provided information about the gamma fluence. Being able to quantify all types of secondary particles produced during the stripping reaction (fast neutrons, gamma radiation and thermal neutrons) is important in order to be able to design properly shielded neutron scattering instruments. Two gamma-sensitive detectors were thus also installed around the moderator. Thermal neutrons (1.8 Å) travel at a speed of 2200 m/s so that the travel time over 8.4 m is 3.8 ms.
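As a minimal sketch of the kinematics used throughout these time-of-flight measurements, the snippet below converts flight time over the 8.4 m path into neutron wavelength (and back) with the de Broglie relation. Only the standard physical constants are used; the function names and the example values reproduced are illustrative.

```python
# Minimal sketch: neutron time-of-flight <-> wavelength over the 8.4 m flight path,
# using lambda = h / (m_n * v). Function names are illustrative.
H = 6.62607015e-34       # Planck constant, J*s
M_N = 1.67492749804e-27  # neutron mass, kg

def tof_to_wavelength(t_s, path_m=8.4):
    """Convert a time of flight (s) over path_m metres to a wavelength in angstroms."""
    v = path_m / t_s                     # neutron speed, m/s
    return H / (M_N * v) * 1e10          # de Broglie wavelength, angstrom

def wavelength_to_tof(lam_angstrom, path_m=8.4):
    v = H / (M_N * lam_angstrom * 1e-10)
    return path_m / v

# 1.8 A thermal neutrons move at ~2200 m/s, hence ~3.8 ms over 8.4 m, as quoted above.
print(wavelength_to_tof(1.8))        # ~3.8e-3 s
print(tof_to_wavelength(2.5e-3))     # ~1.2 A, the thermal-peak position reported later
```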
For a proton pulse width of 100 µs, the energy resolution is thus on the order of 0.1 ms / 3.8 ms = 2.6%, which is good enough to measure a rather precise neutron energy distribution. Note that, besides the proton pulse length, the moderation process leads to an intrinsic neutron pulse broadening which reduces the energy resolution. Monte-Carlo simulations provide an estimate of a broadening on the order of 100-150 µs in a PE moderator [10][11]. In order to validate the Monte-Carlo simulations inside the moderators, gold disks were positioned at various positions inside the moderator. Disks with a diameter of 6 mm and a thickness of 200 µm were used. The weight of these disks was in the 100 mg range so that, even when operating at 10 W, measurable activation of the gold was achieved within 15-30 minutes of operation. Bare disks and disks enclosed in cadmium were measured so as to be able to estimate the thermal and fast neutron flux [10]. The Cd absorption cut-off edge is at around 500 meV (σ = 100 barn), so that the bare disks are activated by the whole neutron spectrum while the Cd-covered disks are only activated by the fast neutrons. These gold activation measurements provide a simple and quantitative way of measuring the thermal neutron flux. However, while the measurement of the gold activation is quantitative and reliable, calculating an absolute neutron flux requires a priori knowledge of the neutron energy distribution. Hence either an assumption on the neutron energy distribution has to be made (a Maxwellian distribution centered around 26 meV, for example) or the activation must be calculated using a simulated neutron energy distribution. Gold disks (with and without Cd casing) have been put into the moderator model and their activation has been calculated during the Monte Carlo simulations. The measured activation values have then been compared with the calculated activation values. Gold-foil activation Gold disks were activated at various positions inside the moderator (at 70 mm and at 210 mm from the maximum thermal flux position). A third measurement was performed with a very small moderator of diameter 66 mm and thickness 50 mm set at the back of the target. The gold disks were irradiated for durations ranging from 15 to 22 minutes, which led to measurable gamma activity from 198Au, except for the fast-neutron measurement (with Cd) at 210 mm, which was below the detection limit. It is possible to obtain a first estimate of the neutron flux by assuming that inside the moderator we have a thermal Maxwell-Boltzmann distribution. Under this assumption, the neutron flux corresponding to the measured activity can be estimated. A neutron activation calculator [12] tells us that a neutron fluence of 10^8 n/cm² gives rise to an activation of 90 Bq/g of 198Au. Previous measurements using a very similar moderator geometry were performed at a proton energy of 10 MeV and at proton currents of 30 µA [13]. The neutron brilliance at a distance of 70 mm from the moderator axis, also measured by the gold foil method, was 3 x 10^9 n/cm²/s (Figure 3 in Allen et al.). It is possible to rescale our results by applying a neutron yield gain factor of 48 between proton energies of 3 MeV and 10 MeV [4] and a factor of 30 to renormalize per µC. The renormalized flux (from 10 to 3 MeV) would thus be 2 x 10^6 n/cm²/µC. Both values are consistent, especially considering the fact that the neutron energies and the moderator geometries are somewhat different.
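A short numerical cross-check of the rescaling quoted above can be written in a few lines; all input numbers are the ones stated in the text, and the helper for converting a measured 198Au specific activity back into a fluence uses the activation-calculator figure also quoted above. The function and variable names are illustrative.

```python
# Cross-check of the rescaling quoted in the text.
flux_10MeV = 3e9            # n/cm^2/s measured by Allen et al. at 10 MeV, 30 uA
current_uA = 30.0           # proton current of that measurement
yield_gain_3_to_10MeV = 48  # neutron yield gain between 3 MeV and 10 MeV [4]

flux_per_uC_10MeV = flux_10MeV / current_uA                # -> 1e8 n/cm^2/uC at 10 MeV
flux_per_uC_3MeV = flux_per_uC_10MeV / yield_gain_3_to_10MeV
print(f"renormalized flux at 3 MeV: {flux_per_uC_3MeV:.1e} n/cm^2/uC")   # ~2.1e6

# Gold-foil conversion quoted from the activation calculator [12]: a fluence of
# 1e8 n/cm^2 corresponds to ~90 Bq/g of 198Au (thermal Maxwell-Boltzmann spectrum assumed).
bq_per_g_per_fluence = 90.0 / 1e8

def fluence_from_activity(activity_bq_per_g):
    """Infer the thermal-neutron fluence (n/cm^2) from a measured 198Au specific activity."""
    return activity_bq_per_g / bq_per_g_per_fluence
```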
Time-of-flight measurements The time-of-flight measurements were performed by setting a single 3He detector at a distance of 8.4 m in the direction of the extraction hole. The incident neutron flux was measured with time channels of width 20 µs. The start of the acquisition was synchronized with the clock signal of the accelerator radiofrequency. Each ToF spectrum was acquired in 5 to 10 minutes. Figure 3 shows examples of ToF spectra measured in various moderator geometries. The neutron signal was first measured with the bare target (without any PE moderator around it, dark blue signal). A small moderator ("mini-moderator") consisting of a block of diameter 66 mm and thickness 50 mm was then set right after the target. Finally, the neutron signal was measured with the PE box around the target and several fillings of the extraction hole (hole totally empty, filled with a 50 mm long rod, and filled with a 130 mm rod). For these last three measurements, the moderator box was encased in a B4C shielding so as to measure only the neutrons exiting the moderator from the 20 mm diameter Cd hole (see Figure 2a). The first time channels (~100 µs) correspond to epithermal neutrons with energy above 1 eV. Thermal neutrons (~26 meV, 1.8 Å) take 3.8 ms to travel over the 8.4 m. In the case of the measurements performed with the PE moderator box, one observes quite a lot of epithermal neutrons (below 1 ms), the more so as the presented data are not corrected for detector efficiency. This is followed by a thermalized neutron peak which has a maximum at around 2.5 ms, corresponding to a neutron wavelength of 1.2 Å. The maximum of the distribution corresponds to a Maxwellian distribution centered around a temperature of 350 K (see Figure 3c). The neutron spectra are very close in the case of an empty hole or with a 50 mm plug. In the case of the empty hole, the distribution is a bit wider, which might be accounted for by the fact that the emission surface is ill defined in this case. On the other hand, when the extraction hole is totally filled there are very few neutrons emitted by the moderator. One may wonder why so many epithermal neutrons are observed. This can be rather simply explained by the fact that, around the moderator, the B4C shielding only stops thermal neutrons, while all the epithermal and fast neutrons are still emitted from the whole moderator volume and not only from the 20 mm diameter hole, as the thermal neutrons are. Hence, the epithermal and thermal fluxes cannot be quantitatively compared. The measurements without any moderator and with the small moderator (green and dark blue) show of course significantly more epithermal neutrons. They also show a hump around the thermal peak at 2 ms which can be accounted for by moderation in some parts of the accelerator. The third striking point is that a long tail of slow neutrons (t > 4 ms) is clearly visible in Figure 3b on a log scale. This does not reflect the presence of long wavelength neutrons but rather the fact that fast neutrons are moderated in the concrete of the casemate and then travel back as thermal neutrons into the detectors. These various effects make these measurements very difficult to exploit quantitatively. Diffraction experiment In order to obtain a measurement of the neutron pulse shape and of the time-of-flight parameters, we used a graphite single crystal. The LENS team uses a Ge crystal, which provides better peak shapes [5].
In the transmission geometry, with the crystal at 0° and the detector in the direct beam, the results were not satisfactory due to a high background level on the detector. Hence, we performed the measurement in a diffraction geometry with the crystal set at 45° with respect to the incident neutron beam. The crystal was installed at 8.4 meters from the source and the detector was set at 90° from the incident neutron beam. Hence it was possible to perform a time-of-flight diffraction experiment on the (00l) graphite diffraction peaks. Due to the 90° position of the detector, the background noise was significantly reduced compared to the transmission geometry. The incident spectrum was first measured in the direct beam without the crystal. The diffraction spectrum was then measured with the detector at 90° and at a distance of 30 cm from the crystal. The diffracted spectrum was divided by the incident spectrum to normalize the neutron intensities. Gamma production The production of gamma radiation was followed during the whole experiment. Without any moderator, the measured gamma activity was 70 µSv/h at a distance of 2 m from the target. With the PE moderator set in place and without any dedicated gamma radiation shielding, the gamma background was measured at around 50 µSv/h at 2 m from the source. Monte-Carlo simulations (MCNP or GEANT4) provide the gamma radiation spectrum generated by the Target-Moderator assembly. Image plate measurements with lead absorbers allowed us to estimate that, at 2 meters from the moderator assembly, the peak gamma spectrum energy was in the 100 keV range. Discussion GEANT4 was used to perform Monte-Carlo simulations of a Target-Moderator assembly to calculate the neutron flux produced in the stripping reaction between 3 MeV protons and beryllium. We have shown that a very low power (~10 W) is sufficient to perform a characterization of the source. Nevertheless, a higher power (~100 W) would be more comfortable for performing diffraction experiments. From these results it is possible to make a first extrapolation of the neutron brilliance which could be achieved by scaling the proton energy from 3 MeV to 20 MeV and the proton current from 1 µA to 100 mA. A gain in the neutron yield of a factor of 200 can be achieved by increasing the proton energy from 3 MeV to 20 MeV. The brilliance at the surface of the moderator would be 1.2 x 10^6 n/cm²/µA/s x 10^5 µA x 200 = 2.4 x 10^13 n/cm²/s (this arithmetic is cross-checked in the short sketch below). This brilliance is close to the brilliance of the Orphée reactor at the entrance of the guide systems, which is on the order of 1.5 x 10^14 n/cm²/s. It is likely that an optimized moderator geometry could raise the brilliance to be on par with that of the Orphée reactor. However, such an accelerator-based source could not operate in continuous mode due to the huge heat load on the target (20 MeV x 100 mA = 2 MW). Such a source should be operated in pulsed mode with a typical duty cycle on the order of 2-4%. This would correspond to a power load on the target on the order of 40-80 kW, which is far more manageable. A detailed account of the possibilities of such a source for neutron scattering experiments will be published elsewhere [14]. Conclusion and outlook The IPHI accelerator was operated for the first time to produce thermal neutrons. Further experiments will be performed to test cold moderator geometries. Besides, the issue of a beryllium target subjected to an intense proton current is a delicate engineering issue.
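The short sketch referred to above simply reproduces the extrapolation arithmetic from the Discussion; all inputs are the values quoted in the text, not an independent estimate.

```python
# Quick numerical cross-check of the brilliance and heat-load extrapolation quoted above.
brilliance_3MeV = 1.2e6       # n/cm^2/uA/s at the moderator surface, 3 MeV
current_uA = 100e3            # 100 mA expressed in uA
yield_gain_3_to_20MeV = 200   # neutron yield gain from 3 MeV to 20 MeV

brilliance_20MeV = brilliance_3MeV * current_uA * yield_gain_3_to_20MeV
print(f"extrapolated brilliance: {brilliance_20MeV:.1e} n/cm^2/s")       # 2.4e13

beam_power_cw_kW = 20e6 * 0.1 / 1e3          # 20 MeV x 100 mA = 2 MW, expressed in kW
for duty_cycle in (0.02, 0.04):              # 2-4 % duty cycle considered in the text
    print(f"duty cycle {duty_cycle:.0%}: average target load {beam_power_cw_kW * duty_cycle:.0f} kW")
```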
At least a dozen groups are working on this target engineering issue, either to produce neutrons for physics experiments or for Boron Neutron Capture Therapy. It is thus very likely that solutions will be found in the short term to be able to operate such sources at powers in the 100 kW range. Hence compact neutron sources would probably demonstrate
4,151
2016-12-01T00:00:00.000
[ "Physics", "Engineering" ]
Beyond $\mathcal{R}(D^{(*)})$ with the general 2HDM-III for $b\to c\tau\nu$ We review the parameter regions allowed by measurements of $\mathcal{R}(D^{(*)})$ and by a theoretical limit on ${\cal B}(B_{c}\to\tau\nu)$ in terms of generic scalar and pseudoscalar new physics couplings, $g_s$ and $g_p$. We then use these regions as constraints to predict the ranges for additional observables in $b\to c\tau\nu$ including the differential decay distributions $d\Gamma/dq^{2}$; the ratios $\mathcal{R}(J/\psi)$ and $\mathcal{R}(\Lambda_{c})$; and the tau-lepton polarisation in $B\to D^{(\star)}\tau\nu$, with emphasis on the CP violating normal polarisation. Finally we map the allowed regions in $g_s$ and $g_p$ into the parameters of four versions of the Yukawa couplings of the general 2HDM-III model. We find that the model is still viable but could be ruled out by a confirmation of a large $\mathcal{R}(J/\psi)$. Introduction Amongst the most interesting current results in B physics, the searches for lepton universality in semileptonic B decays stand out. On the experimental side, hints at deviations from the standard model (SM) in some of these modes have existed for several years, with B̄ → Dτν̄ being measured by BaBar [1,2] and Belle [3], and with B̄ → D*τν̄ being measured by BaBar [1,2], Belle [3][4][5] and LHCb [6,7]. On the theoretical side, many extensions of the SM violate lepton universality whereas the SM does not. The tests involve comparing semileptonic B decays into tau-leptons to those with muons and electrons through ratios such as R(D^(*)) = B(B̄ → D^(*) τ ν̄)/B(B̄ → D^(*) l ν̄), where l represents either e or µ. The current values for these quantities hint at the existence of new physics, as can be seen when comparing the current HFLAV averages [8] to the current SM predictions from the lattice for R(D) [9,10] or from a range of models for R(D*) [11,12]. For our new calculations in this paper, we will use the CCQM model for form factors, which yields somewhat lower values for these quantities albeit with larger errors, R_SM(D) = 0.27 ± 0.03 and R_SM(D*) = 0.24 ± 0.02. A related measurement, B+_c → J/ψ τ+ ν_τ, has been reported by LHCb [13] and also hints at a disagreement with the SM, although the errors are too large at present to reach a definitive conclusion. Different SM predictions arising from different models for the form factors produce a range of 0.24 to 0.28 [14][15][16][17][18], which is about 2σ lower than the LHCb result. With the CCQM form factors we obtain R(J/ψ)_SM = 0.24 ± 0.02, which we use as the SM prediction in our numerical analysis. Not surprisingly, these anomalies have generated enormous interest in the community. From the experimental side, we expect a measurement of the corresponding ratio for semileptonic Λ_b → Λ_c τν, R(Λ_c), to be reported soon. From the theory side there have been several proposals for additional observables to be studied in connection with these modes, such as the tau-lepton polarisation [19][20][21][22][23][24]. In fact, the Belle collaboration has already reported a result for the longitudinal tau polarisation in B̄ → D* τ⁻ ν̄_τ [5], P^τ_L(D*) = −0.38 ± 0.51 +0.21/−0.16, a result in agreement with the SM prediction [19], P^τ_L(D*)_SM = −0.497 ± 0.013, albeit with a large uncertainty. There have also been a large number of theory papers interpreting these results in the context of specific models, including additional Higgs doublets, gauge bosons and leptoquarks [19,23,…].
One of the first possibilities considered was the 2HDM type II, where BaBar [2] determined it was not possible to simultaneously fit R(D) and R(D ). However, a charged Higgs with couplings proportional to fermion masses is an obvious candidate to explain non-universality in semitauonic decays, prompting consideration of the more general 2HDM-III. Several authors have examined the flavour phenomenology of the 2HDM-III in the context of the anomalies mentioned above. Refs. [19,27,57] concluded that it is possible to explain R(D) and R(D ) in this way after considering existing flavour physics constraints. More recently, Ref. [23,58], add an analysis of the longitudinal tau-lepton polarisation and forward-backward asymmetries in b → c/u τ ν decays within the 2HDM-III. In this paper we revisit the b → cτ ν modes in the presence of new (pseudo)-scalar operators to include several new results. We begin in Section II with a review of the constraints imposed by the measurements of R(D ( * ) ) and the theoretical limit on B(B c → τ ν) [59,60]. We then use these constraints to obtain the predicted ranges for R(J/ψ), the tau polarisation in B → D ( * ) τ ν decays, the differential decay rates and the ratio R(Λ c ) in Section III. We pay particular attention to the transverse tau polarisation which is T -odd [20,21,[61][62][63][64][65] as the 2HDM-III model allows for CP violation and would naturally give rise to this effect. We also consider the dΓ/dq 2 distributions [40] in B → D ( * ) τ ν but find that they offer no discriminating power in this case. They do serve to illustrate the CCQM model for the form factors. In Section IV we review the basics of the general two Higgs doublet model and the four different parameterizations for its Yukawa couplings. We then map this parameter space into the generic allowed regions obtained in Section II, finding they are completely accessible to this model. Finally, in Section V we conclude. b → cτ ν constraints on new (pseudo)-scalar couplings The effective Hamiltonian responsible for b → cτ ν transitions that results from the SM plus the 2HDM-III can be written in terms of the SM plus generic scalar operators in the form, where C cb SM = 4G F V cb / √ 2 and the operators are given by As the existing constraints will apply separately to the scalar and the pseudoscalar couplings, it is convenient to define The effect of the effective Hamiltonian, Eq. 8, on the ratios R(D ( * ) ) is known in the literature [11,27,57] and can be written as ratios r D ( * ) = R(D ( * ) )/R SM (D ( * ) ), r D = 1 + 1.5 Re (g S ) + 1.0 |g S | 2 , r D * = 1 + 0.12 Re (g P ) + 0.05 |g P | 2 . A few remarks are in order. First, Refs. [30,57] observe that the coefficient of |g S | 2 can be changed from 1.0 to 1.5 to approximate some detector effects in BaBar. As we use the HFLAV average value for r D from both BaBar and Belle results, we will not include this correction in our numerics. Second, the CCQM model we use for the form factors leads to the slightly different expression r D * = 1 + 0.1 Re(g P ) + 0.03 |g P | 2 , but with larger theoretical errors. We will discuss the effect of this below. It is also known that there are values of C cb L and C cb R that can explain both of these ratios, and that the possible solutions become tightly constrained when one also requires that B(B c → τ ν) ≤ 30% [59], which for NP given by scalar operators implies that the ratio be smaller than around 14.6. An even tighter constraint, by a factor of three, is advocated in Ref. [60]. 
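A minimal sketch of how the allowed region in the complex g_S plane can be scanned numerically, using the r_D parameterisation quoted above, is given below. Only the functional form of r_D is taken from the text; the experimental central value r_exp and uncertainty sigma_r used here are placeholders standing in for the HFLAV average and the SM prediction, and the grid ranges are arbitrary.

```python
import numpy as np

def r_D(g_s: complex) -> float:
    # Parameterisation quoted in the text: r_D = R(D)/R_SM(D) = 1 + 1.5 Re(g_S) + |g_S|^2
    return 1.0 + 1.5 * g_s.real + 1.0 * abs(g_s) ** 2

# Placeholder experimental input (NOT from the paper): measured-over-SM ratio and its 1-sigma error.
r_exp, sigma_r = 1.2, 0.1

re_vals = np.linspace(-3.0, 1.5, 400)
im_vals = np.linspace(-2.5, 2.5, 400)
allowed = []
for re in re_vals:
    for im in im_vals:
        g = complex(re, im)
        if abs(r_D(g) - r_exp) <= 2.0 * sigma_r:     # keep points compatible at the 2-sigma level
            allowed.append(g)

print(len(allowed), "grid points survive; first:", allowed[0])
# Completing the square, r_D = |g_S + 0.75|^2 + 0.4375, so a fixed r_D corresponds to a circle
# centred at -0.75 in the complex g_S plane, which is why the allowed region is ring shaped.
```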
We summarize these results in Figure 1. On the left panel we consider the constraint on g S which arises solely from satisfying R(D) at the 2σ level and appears as the blue ring. The black ring shows the effect of approximating the BaBar detector effects as suggested by Refs. [30,57]. The central panel shows the constraints on g P : the red ring arising from satisfying r D * at the 2σ level and the green circle from B(B c → τ ν) ≤ 30%. The small combined allowed region shows the tension between these two requirements. On the right panel we illustrate these combined constraints on g P as the red crescent shape. If one adopts the condition B(B c → τ ν) ≤ 10% [60] instead, there is no allowed region that also satisfies r D * at the 2σ level, but there is one at the 3σ level and we show this in black. As mentioned above, the expression for r D * with the CCQM form factors is slightly different but with larger errors which allow a larger overlap with B(B c → τ ν) ≤ 30% and this is shown as the orange crescent. For our predictions in the next section we will use the blue ring in the left panel and the red crescent in the right panel. Some, but not all, of these results have appeared before in the literature. For example Refs. [66,67] do not include a constraint from B(B c → τ ν) in their results. The small crescent region where these two intersect is the constraint of g P that we use for our predictions. This region is magnified as the red crescent on the right panel where it is also compared with the larger orange region which uses the CCQM form factors for R(D * ), and with the black region which shows the intersection between R(D * ) at the 3σ level and B(B c → τ ν) ≤ 10%. Differential decay distributions for B → D ( ) τ ν. In Figure 2 we compare the distributions dΓ/dq 2 for B → D ( ) τ ν using the CCQM form factors with parameter values from Ref. [66]. The results indicate that the predicted spectrum is in good agreement with the measurements within the CCQM uncertainties (which the authors of Ref. [66] estimate at about 10%). The modifications to these predictions from g P and g S as constrained above are indistinguishable from the SM within this level of accuracy. R(J/ψ) As already mentioned, there is also a more recent measurement of R(J/ψ) given in Eq. 4, which can be used as an additional test of the model. Using the form factors shown in the appendix with CCQM values from Ref. [68], this can be written in terms of generic scalar coefficients as Note that this result is almost identical to that for r D when the CCQM form factors are used for that case as well. The differential distribution dΓ/dq 2 for B c → J/ψτ ν receives tiny corrections from g S,P as constrained above, making it indistinguishable form the SM one. In Figure 3 we show the prediction for R(J/ψ) that is consistent with the measured R(D ) at 2σ as well as B(B c → τ ν) ≤ 30%. The largest prediction (∼ 1.075) is about 1.5σ away from the LHCb measurement thanks to its present large uncertainty, which in terms of this ratio is r J/ψ = 2.5 ± 1.0. A confirmation of a large value for r J/ψ can potentially rule out (pseudo)-scalar explanations of these anomalies. 
Figure 3: Predictions for r_J/ψ compatible with the measured R(D*) at 2σ as well as B(B_c → τν) ≤ 30%. Polarisations In general, we can define normal, longitudinal and transverse polarisations of the τ lepton as a function of q² with respect to a suitable set of reference vectors [64]. Of particular interest is the normal polarisation, P^τ_N(D^(*)), which is generated by CP violating phases that arise from extended scalar sectors or Yukawa flavour changing couplings. This observable is very small in the SM, where it can only arise due to unitarity phases in electroweak loop corrections [20,21,[61][62][63][64][65]. With the numerical CCQM form factors of Ref. [64], we find the ranges that new (pseudo)-scalar complex couplings allow for these polarisations in the allowed parameter regions obtained above. In particular, we see that P^τ_L(D*) as measured by Belle [5] is consistent with all the predictions, given the current large uncertainty. The figures also indicate that a large CP violating P^τ_N(D) polarisation is possible. Λ_b → Λ_c lν decays As mentioned in the introduction, there is one more ratio in the b → cτν family that is expected to be measured soon by LHCb, namely R(Λ_c). In terms of the CCQM form factors, we show the differential decay rate [67,69] in the appendix. From the partial decay width, Eq. (52), we first obtain the Λ_b → Λ_c µν_µ normalised spectral distribution for the SM and compare it with the one measured by LHCb [70] in Figure 6. The green and yellow shaded areas indicate the estimated 10% and 20% errors in the prediction according to [64]. Once again, this figure serves to calibrate the performance of the CCQM form factors in this case. We also find in this case that the spectral distribution with new g_S,P couplings constrained as above cannot differentiate between the models. We turn to a prediction for R(Λ_c), which is defined analogously to the previous ratios. With the form factors in the appendix, this leads to Eq. 17 and to R(Λ_c)_SM = 0.295, which compares well with other values found in the literature, R(Λ_c)_SM = 0.33 ± 0.01 [69]. Figure 7 shows R(Λ_c) with new contributions from g_S or g_P in their allowed ranges. As Eq. 17 shows no interference between g_S and g_P, the two new contributions simply add. General two Higgs doublet model The most general 2HDM-III, unlike the more common type I and type II versions, allows flavour changing neutral currents (FCNC) at tree level, which are then suppressed with family symmetries, minimal flavour violation, or specific patterns for the Yukawa couplings, for example. The most general renormalisable quartic scalar potential is commonly written in terms of the two scalar doublets Φ_1 and Φ_2 [71]. Discrete symmetries in the 2HDM of type I and II force the parameters µ_12 and λ_6,7 to vanish. The charged Higgs bosons that appear in the mass eigenstate basis correspond to combinations of the charged components of the two doublets, with the rotation angle given by tan β = t_β = s_β/c_β = υ_2/υ_1, and υ_1² + υ_2² = υ² with υ = 246 GeV. There are three neutral scalars that are not CP eigenstates, as the parameters µ_12 and λ_6,7 can be complex and violate CP. These, however, will not play any role in our discussion beyond the occasional use of existing constraints on the mixing amongst the neutral scalars.
The most general Yukawa Lagrangian in the 2HDM-III without discrete symmetries is given by where Φ 1,2 = iσ 2 Φ * 1,2 , Q L and L L denote the left-handed quark and lepton doublets, u R , d R and l R the right-handed quark and lepton singlets and Y u,d,l After spontaneous EWSB and in the fermion mass basis, the charged Higgs couplings to fermions can be written as: where f (x) = √ 1 + x 2 . This form follows the notation of Refs. [72][73][74] in which the first term in each line in Eq. 22 is the coupling in one of the four 2HDM without FCNC and the second term is a flavour changing correction that makes it a type III model. Furthermore, the Cheng-Sher ansatz [75] has been implemented to control the size of the FCNC, but also allowing a CP violating phase: The additional parameters that occur as a consequence of allowing flavour changing couplings areχ q.l ij . The parameters X, Y and Z given in Table 1 are the ones that occur in each of the four types of 2HDM with natural flavour conservation. We now turn to the question of the scalar coefficients in Eq. 8 within the. context of the 2HDM-III considered here. Tree-level exchange of the charged Higgs produces 2HDM-III Assuming that the parametersχ u i,j are of the same order and that theχ d i,j are also of the same order, as we expect in the context of the Cheng-Sher ansatz, the contributions from the heaviest fermions dominate the sums and Eq. 25 reduces to The allowed parameter regions of Figure 1 then imply constraints on the parameters m H ± , tan β,χ u,d,l ij which we discuss next in some detail. The general result is that it is possible to reach the allowed regions in Figure 1 with parameters of the model. Ref. [76] finds solutions for generalised models which can be written in terms of our Eqs. 26 with the factorsX,Ỹ ,Z being arbitrary parameters, independent of tan β. The solutions they find occur for points withX ∼ O(10),Ỹ ∼ O(100),Z ∼ O(100) and m H < 550 GeV. Once we allow for FCNC, all four cases of 2HDM-III can be mapped into the allowed regions in Figure 1. In Figures 8-11 we illustrate the results in two dimensional projections of parameter space. In all cases we present three figures. In the first one we consider the plane tan β − m H ± and scan over all the real and imaginary parts of the χ parameters looking for points that satisfy the primary constraints R(D * ) at 2σ and B(B c → τ ν) ≤ 30%. In the second and third plots we illustrate regions of the parameter space of theχ's where the constraints are satisfied, in particular we specifically show solutions in the vicinity of g s = −0.5 + 0.7i and g P = 0.63 as these two points lie well inside the allowed regions of Figure 1. With solutions in this region of parameter space we findX,Ỹ ,Z are O(10) to O(1000). It is important to emphasise, however, that these are only illustrations and that there are infinitely many solutions. Looking at the four models then, • Model I. We present numerical results for this case in Figure 8. On the left panel we illustrate the region where solutions exist in the tan β − m H ± plane. We see that a lower value of tan β and/or m H ± is needed to obtain solutions with smaller values of |χ u,d,l |. The region shown is dominated by low values of tan β which are compatible with constraints from LHC and LEP on the flavour conserving version of this model as seen in Figure 4 of Ref. [76]. Figure 9 of the same reference indicates that values of tan β 2 are ruled out by B decay constraints. 
These constraints, however, can be significantly modified by flavour changing parameters such asχ d bs . We are not aware of any global fit to the full set of parameters in the general 2HDM. Figure 1 shown in green. • Model II. We present numerical results for this case in Figure 9. On the left panel we illustrate the region where solutions exist in the tan β − m H ± plane. We see that in this case a higher value of tan β and/or a lower value of m H ± is needed to obtain solutions with smaller values of |χ u,d,l |. The tan β − m H ± region of solutions in this case is consistent with the constraints on the corresponding flavour conserving version of this model in Ref. [76]. The centre and right panels illustrate that solutions consistent with the Cheng-Sher ansatz exist in this case. Figure 1 shown in green. • Model X. We present numerical results for this case in Figure 10. On the left panel we illustrate the region where solutions exist in the tan β − m H ± plane. We see that in this case a higher value of tan β and/or a lower value of m H ± is needed to obtain solutions with smaller values of |χ u,d,l |. This scenario is similar to Model II in that the tan β − m H ± region of solutions is consistent with the constraints on its corresponding flavour conserving version as per Ref. [76] (called type IV in that reference). The region illustrated on the centre and right panels needs |χ d,l | values larger than what the Cheng-Sher ansatz would suggest are natural. However, the left panel indicates that there are other solutions which are also consistent with this ansatz. • Model Y. Finally, we present numerical results for this case in Figure 11. On the left panel we illustrate the region where solutions exist in the tan β − m H ± plane. We see that in this case a higher value of tan β and/or a lower value of m H ± is needed to obtain solutions with smaller values of |χ u,d,l |. The tan β − m H ± region of solutions is once again consistent with its corresponding flavour conserving version [76] (called type III in that reference), although the overlap region mostly lies in the upper range of both tan β and m H ± shown in the left |χ  |≤20, |χ  |≤8 panel. This panel also suggests that in this case, theχ parameters are required to be larger than expected in the Cheng-Sher ansatz. Additional considerations that may restrict the parameters in the general model arise form Yukawa couplings to the neutral (SM-like) Higgs defined as Once we introduce non-zero couplingsχ u ct ,χ u ct ,χ l τ τ as in Eq. 26, they also appear in g hτ τ , g hct and g hsb , and are given by These expressions simplify in the alignment limit, defined as cos(β − α) → 0, in which case the couplings of h tend to the SM Higgs couplings. To linear order in cos(β − α) we obtain for models II, X The first constraint arises from the process h → τ + τ − , for which the measured signal strength is [77] leads to −0.32 |h l τ τ | 2 − 1 0.58 (31) at the 95% confidence level. In addition, if the flavour changing couplings get too large they will conflict with the non-observation of t → ch and with indirect limits on h → bs. For B(t → hc) < 0.22% at 95% c.l. [78] one finds cos(β − α) sin βχ u ct The process h → bs has not been constrained yet, but it has been argued in the literature that a branching ratio as large as B(h → bs) ∼ 36% can remain consistent with other flavour results in these types of models [79]. Adopting this number and with the 95% c. l. Γ H < 0.013 GeV [78] we find, The constraints in Eqs. 
31-33 depend on cos(β −α) and disappear in the alignment limit. Ref. [80] presents upper bounds on cos(β − α) of O(0.1) that depend on tan β for the four types of flavor conserving models, so they do not automatically extend to our case. Summary Conclusions We have revisited the 2HDM-III as a possible explanation for the R(D ( * ) ) anomalies. We first summarised the constraints known in the literature in terms of generic (pseudo)scalar couplings and discussed the possible conflict between R(D * ) and B(B c → τ ν) ≤ 30%. We found that the parameter space that can explain these two anomalies at the two-sigma level is limited to the region B(B c → τ ν) > 23%. The bound B(B c → τ ν) > 10% advocated in Ref. [60] in turn restricts the possible explanation of R(D ( * ) ) to the 3σ level within these models. Armed with these constraints we predicted the ranges of other observables in b → cτ ν reactions, including R(J/ψ) and R(Λ c ). We find that the large central value in the current measurement of R(J/ψ) is consistent with this model at about the 2σ level with the currently large experimental error, but that a more precise measurement of this quantity could place it in conflict with R(D ( * ) ). We found that the distributions dΓ/dq 2 in B → Dτ ν, B → D ( * ) τ ν or B c → J/ψτ ν cannot distinguish between the SM or models with new (pseudo)-scalar couplings. We presented predictions for the tau-lepton polarisation in B → D ( * ) τ ν in the presently allowed region of parameter space. In particular we find that phases in the Yukawa couplings can produce substantial T-odd normal polarisations. We considered four versions of the 2HDM-III which are constructed by extending the four flavour conserving 2HDM with the addition of flavour changing couplings that we have limited in size with the Sher-Cheng ansatz. We mapped the allowed regions in g P − g S into the parameter space of these four models. We found that the allowed (m H ± , tan β) ranges also satisfy the LHC and LEP constraints found in the literature for the flavour conserving versions of these models. We also found that the allowed regions of parameter space are not further constrained by h → τ τ , t → hc, h → bs. A Helicity Amplitudes The invariant form factors describing the hadronic transitionsB → D and B → D * are defined as usual where P = p 1 + p 2 , q = p 1 − p 2 , and 2 is the polarization vector of the D * meson which satisfies † 2 · p 2 = 0. The particles are on their mass shells: p 2 1 = m 2 B and p 2 2 = m 2 D ( * ) . All the expressions are written in terms of helicity form factors, which are related to those in Eq. 34 for the B → D transition by [64] where |p 2 | = λ 1/2 (m 2 B , m 2 D ( * ) , q 2 )/2m B is the momentum of the daughter meson with For our numerical estimates we use the helicity amplitudes calculated in the covariant confined quark model (CCQM) with the double-pole parameterisation of Refs. [64,66]: Similarly, for the Λ b decay we need the vector and axial current form factors [69,81] which satisfy the parity relations, where λ 2 and λ W denote the helicities of the daughter baryon Λ c and the virtual W boson respectively. In the SM the helicity amplitudes H V (A) λ 2 ,λ W are given by [81] H with M ± = M Λ b ± M Λc and Q ± = M 2 ± − q 2 . The helicity amplitudes for scalar and pseudo-scalar operators needed for 2HDM are [67] H SP λ 2 ,0 =H S λ 2 ,0 − H P λ 2 ,0 , In this way, the partial decay width of the Λ b → Λ c lν process is given by [67] dΓ where δ l = m 2 l /2q 2 , ij = (− B Polarisations Following Ref. 
[64], the ratios R(D^(*)) are given in terms of the helicity amplitudes above, with g_S ≡ (C^cb_L + C^cb_R)/C^cb_SM and g_P ≡ (C^cb_L − C^cb_R)/C^cb_SM. In terms of Eq. 55 one obtains the longitudinal differential polarisation; similarly, the transverse polarisation follows; and finally, in the presence of CP-violating phases in the NP Higgs exchange amplitude, there is a normal differential polarisation dP^τ_N(D). To calculate the integrated, or q²-averaged, polarisations, one has to include the q²-dependent phase-space factor C(q²) = |p_2|(q² − m_τ²)²/q² [64]. Note that there is a typo in Eq. 41 of Ref. [64], where the denominator of P^(D)_N(q²) should have a 4 instead of a 2. We thank C. T. Tran for confirming this.
6,571.6
2018-05-10T00:00:00.000
[ "Physics" ]
A Fault Diagnosis Strategy for Analog Circuits with Limited Samples Based on the Combination of the Transformer and Generative Models As a pivotal integral component within electronic systems, analog circuits are of paramount importance for the timely detection and precise diagnosis of their faults. However, the objective reality of limited fault samples in operational devices with analog circuitry poses challenges to the direct applicability of existing diagnostic methods. This study proposes an innovative approach for fault diagnosis in analog circuits by integrating deep convolutional generative adversarial networks (DCGANs) with the Transformer architecture, addressing the problem of insufficient fault samples affecting diagnostic performance. Firstly, the employment of the continuous wavelet transform in combination with Morlet wavelet basis functions serves as a means to derive time–frequency images, enhancing fault feature recognition while converting time-domain signals into time–frequency representations. Furthermore, the augmentation of datasets utilizing deep convolutional GANs is employed to generate synthetic time–frequency signals from existing fault data. The Transformer-based fault diagnosis model was trained using a mixture of original signals and generated signals, and the model was subsequently tested. Through experiments involving single and multiple fault scenarios in three simulated circuits, a comparative analysis of the proposed approach was conducted with a number of established benchmark methods, and its effectiveness in various scenarios was evaluated. In addition, the ability of the proposed fault diagnosis technique was investigated in the presence of limited fault data samples. The outcome reveals that the proposed diagnostic method exhibits a consistently high overall accuracy of over 96% in diverse test scenarios. Moreover, it delivers satisfactory performance even when real sample sizes are as small as 150 instances in various fault categories. 
Introduction The utilization of analog circuits is prevalent throughout numerous electronic devices, such as communication equipment and control systems.Because their components have poor tolerance, they are more susceptible to interference and influence.According to relevant surveys, approximately 80% of all electronic circuits employed within electronic equipment are digital circuits.While digital circuits account for the majority of electronic circuits, analog circuits continue to make up a small portion, around 20% of the total, resulting in excess of 80% of system faults [1].In analog circuit faults, complex faults can directly cause the system to malfunction and are easily detected.Soft faults are mainly caused by abnormal changes in resistance, capacitance, and inductance parameters, and the measurement is complex and challenging [2].Since analog circuits are composed of nonlinear and fault-tolerant components, the insufficient number of measurable nodes and measurement uncertainty make fault diagnosis complex.Therefore, realizing soft fault diagnosis at the component level is still challenging.Over the past few decades, numerous scholars have delved into this field, proposing various types of fault diagnostic approaches [3][4][5][6].The field of artificial intelligence has experienced remarkable advancement in recent years, enabling the formulation and successful application of various deep learning models for fault diagnosis due to their excellent independent feature extraction capabilities and outstanding complex process generalization capabilities [7,8].Recently, a deep neural network islanding detection technique based on statistical features was proposed in reference [9], and non-islanding disturbances were classified for hybrid systems based on synchronous and inverter distributed generators, which is a highlight in the field of fault diagnosis.The authors of [10] propose a novel technique for analog circuit fault detection using the application of image recognition, converting the power spectral density of the output signal into a two-dimensional image and inputting it into a deep convolutional neural network to achieve image classification and achieve the purpose of fault detection.Yu et al. [11] conducted research on a novel method for fault detection and diagnosis, which is predicated on a fusion of the firefly algorithm, tent chaos mapping, and extreme learning machine.The efficacy of this approach is exceptional in its ability to generalize well in the realm of fault diagnosis.Liu et al. [12] proposes a fault diagnosis method based on vibration sensor in ellipsoidal-ARTMAP network and differential evolution algorithm. 
The remarkable diagnostic performance exhibited by many deep learning models is inherently tied to the acquisition of sufficient and uniformly distributed data samples.Nevertheless, in practical applications, the procurement of sufficient fault samples proves to be a formidable task owing to various reasons such as costs and safety concerns.When problems such as a small number of samples or different data distributions under different working conditions occur, the impact of a particular factor on the efficacy and precision of the neural network-based fault diagnosis system will be significant.The deep learning model requires much data support to exert its powerful capabilities in data modeling and classification identification.It also requires sufficient data coverage under different working conditions to ensure the capability of a deep learning model to extend its knowledge learned and evolve to novel situations, which is evaluated in [13].When the limited amount of data is utilized directly for deep learning training, it will lead to extremely serious overfitting, especially not conducive to fault diagnosis.Effective training of deep learning models with small sample datasets requires preprocessing of the data, which is pivotal to achieving optimal training outcomes.The core of small sample fault diagnosis is to resolve the issue of an insufficient number of samples.In the field of fault diagnosis research, using original samples to expand the amount of data or improve data quality is an effective solution.The existing main methods to expand the target sample size are the Synthetic Minority Oversampling Technique (SMOTE), variational autoencoder (VAE) [14], autoregressive model [15], and generative adversarial network (GAN) [16,17].Chawla et al. [18] introduced the SMOTE technique, which increases the quantity of samples available for analysis by finding similar samples near the minority class samples and synthesizing new small samples.However, it should be noted that when the samples are distributed on the edge of the classification, the samples synthesized by the SMOTE method classification boundaries may be blurred.Autoregressive models can perform density estimation on sequence data well, but their computational load is much larger than that of VAE and GAN.The VAE method is a commonly used generative model.Through the hidden features (low-dimensional hidden variables) learned in the input data, these hidden features are sampled and reconstructed, and finally, new data similar to the input data are generated, such as in reference [19], which introduces a variational autoencoder (VAE) into the fault diagnosis, facilitating data augmentation through the generation of vibration signals.Subsequently, an enhanced fault diagnosis method is proposed, wherein a convolutional neural network is combined with the aforementioned approach to realize improved accuracy and performance.Although the VAE method is trained and the training process is relatively stable, the generated pictures are relatively blurred, and the generated samples lack the capability to capture comprehensive fault information.In addition, the authors of [20] designed an adversarial transfer network comprising multiple scales based on an attention mechanism that automatically distinguishes various bearing fault states.The TrAdaBoost method is different from other methods of expanding the target sample size.It uses auxiliary datasets for training and applies them to the training of classifiers through weight adjustment.Xiao et 
al. [21] used CNN as a classifier, combined with the method of assigning weights to joint training datasets and the weight update of the TrAdaBoost model to achieve accurate fault diagnosis with high precision for small samples.The TrAdaBoost method is flexible and easy to operate when combined with other classifiers but requires a suitable and similar auxiliary dataset, which is difficult to obtain.In addition, classification difficulty increases when there is noise in the dataset itself. In 2014, Goodfellow et al. [22] summarized the characteristics of the generative and discriminative models in machine learning and proposed a creative network model-generated confrontation network (GAN).A novel deep learning approach has emerged that can be utilized for the purpose of data augmentation, data generation, data modeling, and other fields.Traditional GAN uses Jensen-Shannon divergence (JS divergence) to quantify the degree of congruence between false images and authentic images.With the in-depth research on the GAN network, researchers found that the network is prone to problems such as network collapse and gradient explosion during the training process, which makes the GAN network challenging to train.In order to address the aforementioned issues, more and more variant algorithms have been proposed.Least Squares Generative Adversarial Networks (LSGAN) [23] are designed to enhance the objective function of a GAN network, specifically the discriminator, by replacing the cross-entropy loss function with a least squares loss function, making the transfer of gradients more effective and the model's training process more stable.Liu et al. [24] advanced a technique that amalgamates variational autoencoding and GAN to glean precise features of real data samples in the context of data scarcity.Wasserstein GAN [25,26] effectively substitutes the Kullback-Leibler (KL) divergence and Jensen-Shannon (JS) divergence with the Wasserstein distance in order to accurately measure the distance between the generated distribution and the actual distribution and make the gradient calculation of the generator more accurate.The conditional generative adversarial network (CGAN) [27] proposes adding conditional information to GAN's network structure so that GAN can accurately train and generate images of various categories in one model simultaneously and according to different datasets and generation requirements.Similarly, Dixit et al. 
[28] used CGAN to generate vibration signals of rotating machines.Then, they used auxiliary classifiers as GAN discriminators in these signals and performed meta-learning to improve the generalization of classifiers' capabilities for better fault diagnosis.References [29][30][31] go further based on CGAN by adding classifiers and clusters inside GAN to partially or entirely obtain condition information.Then, they conduct model training similar to CGAN, thus partially or entirely removing the limitations of label information on CGAN training.The authors of [32] propose the use of a temporal generative adversarial network, coupled with an efficient network model, to implement a transfer learning method, displaying excellent levels of effectiveness, reliability, and generalization performance.Deep Regret Analytic Generative Adversarial Networks (DRAGAN) [33] introduce the no-regret algorithm in game theory and transform its loss function to solve the GAN collapse problem.The authors of [34] incorporate a discriminator into its network design to resolve the issue of restricted variety within the generated samples.Rad-ford et al. [35] proposed the introduction of a convolutional neural network in place of the original fully connected network within the architecture of the GAN to develop the deep convolutional generative adversarial network (DCGAN), which significantly improved the generation and generalization capabilities of the GAN network model and provided new ideas for the development of GAN networks.The authors of [36] have employed DCGAN to convert one-dimensional vibration data into grayscale images, thereby increasing the availability of fault data samples and resolving the issue of inadequate data samples. The Transformer model [37] was proposed by Google in 2017.One of the most notable characteristics of this particular model is its departure from network structures typically utilized in RNN and CNN.The Transformer model has gained significant prominence in the realm of machine translation due to its remarkable capabilities.Recently, there has been a growing interest in applying the Transformer model to various fields, such as sequence data prediction, target detection, and image classification, leading to promising outcomes [38][39][40].The Swin Transformer [41], proposed by Microsoft Research Asia, has attained superior performance in multiple vision tasks, indicating its state-of-the-art nature.As the number of network layers increases, the attention mechanism may collapse, resulting in a decrease in Transformer model performance rather than an increase.To address this challenge, Touvron et al. [42] conducted research based on the ViT model and designed a layer scale structure that can also converge when the network is deepened, solving the problem of variance amplification in the residual connection process.A proposed method for fault diagnosis in rotating machinery utilizes intelligent feature self-extraction and transformer neural network techniques [43].The multi-scale SinGAN model [44] is especially adept at generating high-quality kurtosis map images, which can play a pivotal role in effectively training complex machine learning models. 
The methods proposed in the above literature have shown good learning capabilities and transfer effects in various unsupervised cross-domain fault diagnosis tasks, providing an essential reference for subsequent research in this field. To enhance the precision, stability, and broad applicability of diagnostic assessments, a set of innovative enhancements is proposed in this work. Contributions: Our approach is made possible by the following technical contributions: The rest of the content is organized as follows: In Section 2, the theoretical background and the proposed method are outlined. Section 3 illustrates the validation of the proposed method by detailing the selection of three experimental circuits and the construction and preparation of the fault dataset. Section 4 delves into the experimental outcomes and analysis. The final section, Section 5, provides a comprehensive summation of the complete paper.

Related Theories 2.1. Continuous Wavelet Transform Continuous wavelet transform (CWT) can display the time-frequency characteristics of a vibration signal as an image. For a mother wavelet function ϕ(t) ∈ L²(R), its Fourier transform φ(ω) satisfies the admissibility condition

∫ (|φ(ω)|² / |ω|) dω < ∞    (1)

where ω denotes frequency and φ(ω) denotes the Fourier transform of ϕ(t). Stretching and translating the mother wavelet yields a family of wavelet functions

ϕ_{a,b}(t) = |a|^{-1/2} ϕ((t − b)/a)    (2)

where t = a t₀ + b, a is the scale factor, b is the displacement factor, and ϕ_{a,b}(t) is the analysis wavelet. The scale factor determines the extent to which the wavelet function is scaled in the frequency domain, while the displacement factor governs the extent to which the wavelet function is displaced in the time domain. Combined with the above explanation, the CWT of any finite-energy signal x(t) ∈ L²(R) is defined as

CWT_x(a, b) = |a|^{-1/2} ∫ x(t) ϕ*((t − b)/a) dt    (3)

where ϕ*((t − b)/a) is the conjugate of ϕ((t − b)/a), and CWT_x(a, b) is the scalar product of the signal x(t) and the wavelet ϕ_{a,b}(t).
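To make this preprocessing step concrete, the following is a minimal sketch of how a one-dimensional output-voltage signal could be converted into a time-frequency image with a continuous wavelet transform. It assumes the PyWavelets library and a Morlet mother wavelet; the sampling rate, scale range, and wavelet choice are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
import pywt  # PyWavelets

def cwt_scalogram(signal, fs=60_000, num_scales=64, wavelet="morl"):
    """Return |CWT| coefficients of a 1-D signal as a 2-D time-frequency array.

    signal: 1-D numpy array (e.g., a sampled output voltage of the circuit)
    fs:     sampling frequency in Hz (60 kHz is used later in the paper)
    """
    scales = np.arange(1, num_scales + 1)               # scale sweep
    coeffs, freqs = pywt.cwt(signal, scales, wavelet,
                             sampling_period=1.0 / fs)   # CWT_x(a, b)
    return np.abs(coeffs), freqs                         # magnitude scalogram

# Example: a noisy two-tone test signal standing in for a circuit response
t = np.arange(600) / 60_000
x = np.sin(2 * np.pi * 1e3 * t) + 0.5 * np.sin(2 * np.pi * 5e3 * t)
scalogram, freqs = cwt_scalogram(x)
print(scalogram.shape)  # (num_scales, len(signal)) -> saved as a grayscale image
```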
Deep Convolutional Generative Adversarial Networks The concept of adversarial training for GAN networks stems from the field of game theory, and the main network model is composed of a generator built from a deep network and a discriminator built from another deep network. Deep convolutional generative adversarial networks are an innovation on the basic framework of the GAN and are particularly suitable for processing two-dimensional data such as images and three-dimensional data [45]. Like the GAN, the game comprises a generator and a discriminator, which compete through their respective operations and ultimately reach a state of Nash equilibrium [21]. The basic structure is shown in Figure 1. z is the input random noise, and G is the generator. G(z) represents the generated picture, x is the real data, and D is the discriminator. The random noise z is fed into the generative model, and the model parameters are learned; in the generative model, the noise is gradually converted into sample output close to the real data. The discriminative model is responsible for evaluating whether the samples generated by the generator network are close to the real data. The generator and discriminator are trained alternately, with the generator endeavoring to deceive the discriminator and the discriminator endeavoring to discern the difference between the generated samples and the authentic data. This process is maintained until the output of the generator network approximates the real data to a degree where the discriminator network can no longer distinguish the samples. Through a consistent process of learning and refining, the generator enhances its capacity to produce sample data. Therefore, the training problem of G can be expressed as a value function V(D, G), which becomes a maximization and minimization problem of V(D, G). This can be expressed as follows:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]    (4)

where x is the input of the network comprising real images, D(x) represents the likelihood that an image is real, z denotes the noise inputted into the generator, G(z) symbolizes a fictitious image generated by the generator, and D(G(z)) signifies the probability assigned to a false image. Equation (4) is split into two loss function models. In the architecture of the discriminator model, the loss function is given by the following Equation (5):

L_D = −E_{x∼p_data(x)}[log D(x)] − E_{z∼p_z(z)}[log(1 − D(G(z)))]    (5)

Among them, the first item of the equation corresponds to the discrimination result on real samples, which should be as close to 1 as possible for better performance. The second term refers to the discrimination result on newly generated samples, which should be closer to 0 to achieve optimal performance. Obtaining the generator loss function follows the same principle as the discriminator loss function and is calculated using Equation (6):

L_G = −E_{z∼p_z(z)}[log D(G(z))]    (6)

In contrast to the discriminator, the generator only needs D(G(z)) to approach 1 on the generated data.
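As a concrete illustration of the alternating optimization behind Equations (4)-(6), the sketch below implements one training step of a generator/discriminator pair with binary cross-entropy losses. It is a minimal PyTorch example assuming pre-built G and D modules (with D ending in a sigmoid) and should be read as an interpretation of the standard GAN objective, not the authors' exact training code.

```python
import torch
import torch.nn.functional as F

def gan_train_step(G, D, real_images, opt_G, opt_D, z_dim=100):
    """One alternating update of D (maximize V) and G (fool D).

    Assumption: D returns probabilities of shape (batch, 1) via a sigmoid.
    """
    batch = real_images.size(0)
    device = real_images.device
    ones = torch.ones(batch, 1, device=device)
    zeros = torch.zeros(batch, 1, device=device)

    # ---- Discriminator step: push D(x) -> 1 and D(G(z)) -> 0 ----
    z = torch.randn(batch, z_dim, 1, 1, device=device)
    fake = G(z).detach()                         # stop gradients into G
    loss_D = F.binary_cross_entropy(D(real_images), ones) \
           + F.binary_cross_entropy(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # ---- Generator step: push D(G(z)) -> 1 (non-saturating form of Eq. (6)) ----
    z = torch.randn(batch, z_dim, 1, 1, device=device)
    loss_G = F.binary_cross_entropy(D(G(z)), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    return loss_D.item(), loss_G.item()
```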
The theoretical basis is analyzed as follows: Starting from the objective function, since the value function V(D, G) is continuous, Equation (4) is written in integral form to express the mathematical expectation, as in Equation (7):

V(D, G) = ∫ p_data(x) log D(x) dx + ∫ p_z(z) log(1 − D(G(z))) dz    (7)

Assuming that the data generated by the generator G(z) are x, the noise point z and the differential dz can be expressed in terms of x, as in (8) and (9). Substituting Equations (8) and (9) into Equation (7) and defining the generating distribution of the noise input z as P_g(x), where P_z denotes the noise distribution fed to the generator, we obtain

V(D, G) = ∫ [p_data(x) log D(x) + P_g(x) log(1 − D(x))] dx    (10)

Evaluating the maximum of the function V(G, D) by fixing G and taking the partial derivative with respect to D yields

D*(x) = p_data(x) / (p_data(x) + P_g(x))

As can be seen from the expression of D*(x), when the distribution of the generated virtual samples is consistent with that of the real data, i.e., P_data(x) = P_g(x), we have D*(x) = 0.5; at this point, the D network is unable to judge whether the sample data generated by the G network are true or false, that is, D has achieved its optimal solution. If and only if P_data(x) = P_g(x), the maximization and minimization problem of V(D, G) has a globally optimal solution; that is, it reaches the Nash equilibrium state. It is possible to stop the training procedure at this point, since model G has mastered the actual sample distribution and the accuracy of model D remains constant at 50%.

Transformer The Transformer, a deep model widely used in natural language processing and image analysis, can efficiently process sequence data, capture features by modeling the dependencies inside a sequence, and improve computational efficiency without a loop structure. Its individual components are analyzed next.

Attention Mechanism The attention module is inspired by human attention and is intuitively explained by the attention mechanism of human vision. Concretely, machine learning can be viewed as a mathematical computation over data attributes that takes certain weights into account; these weights are the attention. As a member of the attention family, self-attention can be used to calculate the interdependence between different regions inside an image. A common attention mechanism is soft attention (SA), and the attention mechanism can be used in any model. This paper adopts SA, which is grounded in the encoder-decoder framework; this framework has gained widespread acceptance in natural language processing and time series prediction. As shown in Figure 2, the input sequence can be represented as [x₁, x₂, ⋯, x_n] and the output sequence as [y₁, y₂, ⋯, y_m], where the attention result c_i is calculated by the following formula:

c_i = Σ_{j=1}^{n} a_ij h_j

where n is the input data length of the coding layer, h_j is the hidden layer state of the j-th input in the coding layer, and a_ij represents the attention distribution coefficient of the j-th input in the encoding layer when the i-th value is output by the decoding layer. a_ij is calculated as follows:

a_ij = exp(F(H_i, h_j)) / Σ_{k=1}^{n} exp(F(H_i, h_k))

where H_i is the hidden layer state of the i-th element in the decoding layer, and F is a function that evaluates the similarity of H_i and h_j; the output of F is normalized by Softmax to obtain the attention.
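The following short sketch shows how the soft-attention context vector c_i described above can be computed from encoder hidden states and a decoder state. NumPy is assumed, and the dot-product similarity used for F is an illustrative choice; the paper does not specify the exact scoring function.

```python
import numpy as np

def soft_attention(H_i, h_enc):
    """Compute attention weights a_ij and context c_i for one decoder step.

    H_i:   decoder hidden state, shape (d,)
    h_enc: encoder hidden states, shape (n, d)  -- h_1 ... h_n
    """
    scores = h_enc @ H_i                   # F(H_i, h_j) as dot-product similarity
    a = np.exp(scores - scores.max())      # numerically stable softmax
    a /= a.sum()                           # attention distribution a_ij
    c_i = a @ h_enc                        # context: sum_j a_ij * h_j
    return a, c_i

# Toy example with n = 4 encoder states of dimension d = 3
h_enc = np.random.default_rng(0).standard_normal((4, 3))
H_i = np.random.default_rng(1).standard_normal(3)
weights, context = soft_attention(H_i, h_enc)
print(weights.round(3), context.round(3))
```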
The scaled dot-product attention used in ViT is shown in Figure 3. Denote by X ∈ R^{n×d} a sequence of n elements (x₁, x₂, ..., x_n), where d denotes the embedding dimension of each element. Through the interaction between Query and Key-Value pairs, the self-attention layer achieves dynamic aggregation of information, which is obtained as follows:

Q = XW^Q,  K = XW^K,  V = XW^V

The calculation of scaled dot-product attention proceeds in the following manner:

Attention(Q, K, V) = Softmax(QK^T / √d_z) V

where QK^T is the attention score, d_z is the dimension of the vectors Q and K, and √d_z is the scaling factor. Multi-head attention (MHSA) has multiple independent self-attention layers (heads), each with its own learnable weight matrices W_i^Q, W_i^K, W_i^V; the calculation process is shown in Figure 4 and in Equations (18)-(20):

Q_i = XW_i^Q,  K_i = XW_i^K,  V_i = XW_i^V    (18)
Z_i = Attention(Q_i, K_i, V_i)    (19)
MHSA(X) = Concat(Z_1, ..., Z_h) W_0    (20)

where h refers to the number of attention heads of MHSA, Z_i represents the output vector of each attention head, W_0 is the output projection matrix, and W_i^Q, W_i^K, W_i^V are the learnable weight matrices. Q_i, K_i, and V_i can be regarded as splits of single-head attention under different feature subspaces, and the correlation between features is extracted from multiple angles without increasing the computational cost. Finally, the information extracted by each self-attention layer is merged to obtain richer and more comprehensive feature information.
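To tie the formulas above to executable form, the snippet below computes scaled dot-product attention and a simple multi-head variant with NumPy. The head count, dimensions, and random projection matrices are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Softmax(Q K^T / sqrt(d_z)) V for one head."""
    d_z = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_z)                  # attention scores
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

def multi_head_attention(X, heads=4, d_head=8, seed=0):
    """Project X into `heads` subspaces, attend in each, then concatenate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    outputs = []
    for _ in range(heads):
        W_q, W_k, W_v = (rng.standard_normal((d, d_head)) for _ in range(3))
        outputs.append(scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v))
    W_o = rng.standard_normal((heads * d_head, d))    # output projection W_0
    return np.concatenate(outputs, axis=-1) @ W_o

X = np.random.default_rng(1).standard_normal((16, 96))  # 16 tokens, dim 96
print(multi_head_attention(X).shape)                     # (16, 96)
```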
Transformer Model The Transformer model, illustrated in Figure 5, consists of an encoding block and a decoding block. The contrast between the decoding layer and the encoding layer lies in the fact that each decoding layer comprises two multi-head attention layers. In the decoding layer, the first attention layer mirrors that of the encoder layer, while the K and V inputs of the second attention layer come from the output of the encoding block and the preceding regularization layer furnishes the Q for the attention mechanism of this layer. In addition, the structure of the regularization layer in the Transformer is consistent; it is mainly composed of a residual connection followed by a normalization operation:

output = LayerNorm(x + z)

where z is the output of the attention or fully connected layer and x is its input.

The Proposed Method The first step involves collecting the real output voltage signal of the analog circuit. After that, the output signal's time-frequency characteristics are extracted using CWT. Then, the DCGAN generation model is trained using the real time-frequency feature graphs, and new time-frequency feature graphs can be generated by inputting the real time-frequency feature graphs into the trained DCGAN model. Mixing real time-frequency signals and generated time-frequency signals can be used to train the Transformer model, realize automatic mining and classification of time-frequency features, and finally complete fault diagnosis. The diagram representing the proposed diagnostic strategy's workflow is displayed in Figure 6. The first step is DCGAN training: the generator generates new sample data (fake samples) by constantly transforming random noise pictures conforming to a normal distribution. The discriminator network receives the fake samples and the real samples and determines whether the image data correspond to real images by estimating their probability. The generator and discriminator engage in a continuous optimization process through adversarial confrontation until the loss function converges. At that point, the generator model is saved, and additional sample data are generated. The next step involves utilizing the Transformer model to pre-train on the newly generated sample data, adjusting the parameters until the model converges and reaches a better evaluation index, and saving the pre-trained model.
Thirdly, a Transformer is used to conduct iterative training and testing on the original sample data until the training converges. In the event that convergence has not been achieved, the learning rate, as well as the dimensions and number of convolutional layers, will be adjusted until convergence is attained.

Analog Circuit Experiment and Fault Setting This section primarily covers topics related to the analog circuit examples, fault configuration and data acquisition, signal preprocessing methods, and diagnostic model parameter settings. These are preparatory steps for the subsequent presentation and analysis of the experimental results in Section 4. In the process of operating analog circuits, the performance of capacitors, resistors, and other components inevitably deteriorates. Deviations of the actual output signal from its theoretical value may occur due to various factors. In general, a device has a nominal operating value X, a tolerance α, and a threshold β. The device can still work normally while operating within the tolerance range. If the device operates beyond the tolerance range but does not exceed the threshold range, the circuit has a soft fault. In the event that the device parameters exceed the pre-established threshold, the circuit will cease functioning altogether. To verify the efficacy of the suggested algorithm, three circuits, namely the Sallen-Key bandpass filter circuit, the Biquad low-pass filter circuit, and the Thomas filter circuit, were subjected to experimental verification, as illustrated in the accompanying figures.

Fault Settings In the field of circuit fault diagnosis, the selection of faulty components is typically determined based on an analysis of which components have a significant impact on the output. This determination is generally guided by the circuit's transfer function. The specific approach involves conducting sensitivity analysis using circuit simulation software to identify components that have a substantial influence on the circuit's output. These components are then considered as potential candidates for faults, thus laying the groundwork for subsequent research. If only one device in the analog circuit has a soft fault, it is called a single fault, and its fault settings are depicted in Tables 1 and 2.
In the event of multiple components experiencing simultaneous failures, it is referred to as a compound fault, wherein the fault signal manifests properties similar to those observed with a single fault. Although its incidence is low, its low feature recognition rate and many fault modes cannot be ignored, and its fault settings are depicted in Table 3. In an analog circuit, a component that has a significant impact on the output of the circuit is referred to as a sensitive element. The Sallen-Key bandpass filter circuit, depicted in Figure 7, exhibits a number of sensitive components, including C1, C2, R2, and R3. The Biquad low-pass filter circuit illustrated in Figure 8 features a number of sensitive components, including C1, C2, R1, R2, R3, and R4. Lastly, the Thomas filter circuit, shown in Figure 9, possesses several sensitive components, including C1, C2, R3, R4, and R5. To conduct an analog circuit fault test, the device parameter is adjusted upwards (↑) or downwards (↓) by a specified amount that exceeds the standard value, so as to reproduce potential faults within the circuit. The tolerances of the resistances and capacitances of the circuit are 5% and 10%, respectively, and the fault threshold is set to 30%. Specifically, the range of resistance failure in the circuit is encompassed within the limits of [70% X, 95% X] ∪ [105% X, 130% X]. Likewise, the range of capacitor failure in the circuit is restricted within the bounds of [70% X, 90% X] ∪ [110% X, 130% X], where X denotes the nominal value of the component.
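The tolerance/threshold rule above can be expressed compactly in code. The helper below labels a component value as normal, soft fault, or hard fault given its nominal value, tolerance, and threshold; the function name and interface are illustrative, not from the paper.

```python
def fault_state(value, nominal, tolerance, threshold=0.30):
    """Classify a component value against its nominal value X.

    tolerance: e.g. 0.05 for resistors, 0.10 for capacitors
    threshold: 0.30 in this paper (30 % of the nominal value)
    """
    deviation = abs(value - nominal) / nominal
    if deviation <= tolerance:
        return "normal"        # within manufacturing tolerance
    if deviation <= threshold:
        return "soft fault"    # e.g. resistor in [70%X, 95%X] U [105%X, 130%X]
    return "hard fault"        # beyond the 30 % threshold

# A 10 kOhm resistor measured at 8.5 kOhm deviates by 15 % -> soft fault
print(fault_state(8_500, 10_000, tolerance=0.05))
```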
Signal Preprocessing In order to enhance the training capacity of the model and minimize the number of computations required, the data must be preprocessed before use. As shown in Figure 10, the output voltage signals of the Sallen-Key bandpass filter circuit and the Thomas filter circuit under partial fault conditions are displayed. A. Standardized processing To further standardize the collected data and better train the neural network model, the data of each sample are standardized. The standardization method adopted in this paper is min-max standardization. Its mathematical expression is as follows:

x′ = (x − x_min) / (x_max − x_min)

where x_min and x_max are the minimum and maximum values of the sample.
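Below is a minimal sketch of the min-max standardization in step A applied to one collected sample before the spectrogram conversion; NumPy is assumed, and the per-sample scaling shown here is an interpretation of the description above.

```python
import numpy as np

def min_max_standardize(sample, eps=1e-12):
    """Scale a 1-D voltage sample to [0, 1]: x' = (x - min) / (max - min)."""
    x = np.asarray(sample, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + eps)

raw = np.array([1.8, 2.4, 0.9, 3.1, 2.2])   # toy voltage readings
print(min_max_standardize(raw))              # values now lie in [0, 1]
```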
B. Spectrogram conversion Consider the Sallen-Key bandpass filter circuit as an example. When continuous wavelet transform processing is applied to the output signal, the resulting graphs can be seen in Figure 11. Upon reviewing Figure 11, it is evident that the transformations resulting from the various fault categories display only subtle distinctions in the accompanying images. Therefore, manual classification is difficult, and automatic recognition technology is necessary to explore the differences between different faults. C. Dataset partitioning A total of 300 samples were collected for each fault type in both the single-fault and multi-fault circuit cases. The samples were subjected to the aforementioned pretreatment procedure. A training dataset of 300 samples was assembled for each fault type, with its composition determined by combining real samples and generated samples in varying mixing ratios. A crucial aspect of the training process is to establish the ratio of real samples to generated samples in order to optimize the performance of the fault diagnosis model.

Fault Signal Collection The experimental platform depicted in Figure 12 demonstrates a viable method for data acquisition. The equipment utilized incorporates an Agilent 33250 arbitrary waveform generator, an Agilent 54853 digital oscilloscope, a power supply, LabVIEW_2023 Q1 software, a National Instruments 1042q data acquisition module, and the circuit under test (CUT). The arbitrary waveform generator creates a sinusoidal scanning excitation signal, which is then recorded using the LabVIEW software. The test circuit's output signal is collected under various faults by applying the sinusoidal excitation signal at its input. Since the output signal is periodic, the signal is collected for 50 cycles and divided into 50 samples. Specifically, to obtain accurate data, the first 10 ms of the output signal are collected with the sampling frequency set at 60 kHz and the number of sampling points set at 600. Each fault type and the non-fault state are assigned 500 labeled samples. Subsequently, the samples are normalized and divided evenly into five separate subsets for five-fold cross-validation.
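The sketch below illustrates one way the mixed training sets and the five-fold split described above could be assembled: real and DCGAN-generated scalogram images are combined at a chosen ratio into a 300-sample training set per fault class, and the labeled real samples are split with scikit-learn. The array shapes and the helper name are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def build_training_set(real_imgs, fake_imgs, n_real=150, n_fake=150, seed=0):
    """Mix real and DCGAN-generated scalograms into one class's 300-sample set."""
    rng = np.random.default_rng(seed)
    pick_real = rng.choice(len(real_imgs), n_real, replace=False)
    pick_fake = rng.choice(len(fake_imgs), n_fake, replace=False)
    return np.concatenate([real_imgs[pick_real], fake_imgs[pick_fake]])

# Stand-ins for scalogram images: 500 real and 500 generated samples of one class
real = np.random.rand(500, 64, 64)
fake = np.random.rand(500, 64, 64)
train_class = build_training_set(real, fake)          # shape (300, 64, 64)

# Five-fold split of the labeled real data across 10 fault classes
X = np.random.rand(1000, 64, 64)
y = np.repeat(np.arange(10), 100)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X.reshape(len(X), -1), y):
    pass  # train on X[train_idx], evaluate on X[test_idx]
print(train_class.shape)
```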
Parameter Settings Involved in the Proposed Diagnostic Strategy As depicted in Figure 13, a pragmatic experimental platform has been developed for data acquisition. It encompasses an Agilent 54853 digital oscilloscope, a circuit under test (CUT), LabVIEW software, an Agilent 33250 arbitrary waveform generator, a National Instruments 1042q data acquisition module, and a power supply. The DCGAN data generation model comprises a generator and a discriminator. The DCGAN generator uses five transposed convolutional layers to upsample a 100-dimensional noise vector; the final output is a 64 × 64 × 1 image. Each transposed convolutional layer has a stride of 2 and a 4 × 4 kernel size, and the output dimensions of the transposed convolutional layers are 1024 × 4 × 4, 512 × 8 × 8, 256 × 16 × 16, 128 × 32 × 32, and 1 × 64 × 64. The architecture of the DCGAN discriminator mirrors that of the generator in a symmetric manner: it takes 64 × 64 × 1 images as input and uses five convolutional layers for feature extraction, with a stride of 2 and a kernel size of 4 × 4; the output dimensions of the convolutional layers are 1 × 64 × 64, 128 × 32 × 32, 256 × 16 × 16, 512 × 8 × 8, and 1024 × 4 × 4. The resultant output is reshaped to a predetermined shape and fed into a fully connected layer, whose output value is used as the basis for the discriminator's decision. The batch size is 64, the learning rate is 0.001, and the model is trained for 50 epochs using the Leaky ReLU activation function. The Adam optimizer is used with a β1 value of 0.5. In the context of the Transformer diagnostic model, the parameter configuration used in the experiment is as follows: a patch size of 4 × 4, 2 encoder layers, a token dimension of 96, and 12 attention heads. The maximum number of iterations during the training phase of the model is set to 50.

Result Analysis and Discussion In Section 3, after identifying the fault types for each analog circuit test case, data were collected, and model parameters were set. Subsequently, fault diagnosis tests were conducted on the three circuits to validate the effectiveness of the proposed data generation and fault diagnosis methods. This section will primarily focus on presenting and analyzing the experimental results.
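Before turning to the results, the following is a minimal PyTorch sketch of the five-layer transposed-convolution generator whose geometry is listed in the Parameter Settings above, mapping a 100-dimensional noise vector to a 64 × 64 × 1 image. The channel counts follow the dimensions quoted above, while the normalization and activation choices are standard DCGAN conventions and should be read as assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """100-d noise -> 1024x4x4 -> 512x8x8 -> 256x16x16 -> 128x32x32 -> 1x64x64."""
    def __init__(self, z_dim=100):
        super().__init__()
        def block(c_in, c_out, first=False):
            return nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, kernel_size=4,
                                   stride=1 if first else 2,
                                   padding=0 if first else 1, bias=False),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
        self.net = nn.Sequential(
            block(z_dim, 1024, first=True),          # 4 x 4
            block(1024, 512),                        # 8 x 8
            block(512, 256),                         # 16 x 16
            block(256, 128),                         # 32 x 32
            nn.ConvTranspose2d(128, 1, 4, 2, 1),     # 64 x 64
            nn.Tanh(),                               # grayscale output in [-1, 1]
        )

    def forward(self, z):                            # z: (batch, 100, 1, 1)
        return self.net(z)

g = Generator()
print(g(torch.randn(8, 100, 1, 1)).shape)            # torch.Size([8, 1, 64, 64])
```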
Diagnostic Results of the Proposed Method After the faults were determined, the data collected, and the model parameters set, fault diagnosis tests were conducted on the three circuits to validate the efficacy of the proposed data generation and fault diagnosis methodologies. Specifically, five distinct tests were conducted for the single-fault and compound-fault configurations of the aforementioned circuits to prevent arbitrary test outcomes. The sample size for each fault category in each test instance was set at 50. In addition, it is essential to note that in the training process of the Transformer for fault diagnosis, the real sample size and the generated sample size were each set at 150. Table 4 shows the specific classification results for each fault category of the CUT1 single fault in a single test. For the single CUT1 circuit fault case, three indicators were calculated for each fault category, namely Precision, Recall, and Specificity. Among them, the Precision index measures the ratio of accurately predicted positive samples, while the Recall index evaluates the proportion of positive samples that are correctly predicted, indicating the degree to which positive samples are identified. The Specificity index refers to the ratio of correctly identified samples out of all negative samples. As is evident from Table 4, for the fault-free category F0, all three index values are 1, indicating that all samples of this category are correctly identified. According to the values of the three indexes, the F1 and F7 fault categories suffer the most serious misclassification. In addition, the confusion matrices of the single-fault diagnosis results of CUT1, CUT2, and CUT3 and the compound-fault diagnosis result of CUT1 are shown in Figure 14. As displayed in Figure 14, in the three cases of single-fault diagnosis, among the 450 test samples of CUT1 shown in Figure 14a, all the normal samples were correctly classified and a total of 11 fault samples were misclassified. In the test samples of CUT2, as displayed in Figure 14b, a total of 13 samples were mislabeled. A total of 16 samples were mislabeled in CUT3, as shown in Figure 14c. As shown in Figure 14d, a total of 13 samples of the CUT1 compound fault were misclassified. In the figure, the overall classification accuracy in these four cases reaches more than 96%. It can be observed that the proffered methodology exhibits a commendable classification efficacy, both in instances of single faults and compound faults.
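The three per-class indicators reported in Table 4 can be obtained directly from a confusion matrix, as in the short sketch below; the example matrix is synthetic and only illustrates the computation.

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, Recall, Specificity for each class from a confusion matrix.

    cm[i, j] = number of samples whose true class is i and predicted class is j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp                  # predicted as the class but wrong
    fn = cm.sum(axis=1) - tp                  # missed samples of the class
    tn = cm.sum() - tp - fp - fn
    return tp / (tp + fp), tp / (tp + fn), tn / (tn + fp)

# Toy 3-class example (rows: true label, columns: predicted label)
cm = [[50, 0, 0],
      [2, 47, 1],
      [0, 3, 47]]
for name, vals in zip(("Precision", "Recall", "Specificity"), per_class_metrics(cm)):
    print(name, np.round(vals, 3))
```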
Comparison Experiments To analyze the superiority of the proffered scheme, comparative experiments were set up in this paper, with a total of four comparison methods. The first comparison method transforms the time-domain signal of the circuit output into a two-dimensional signal in a different way, with the aim of examining and diagnosing fault signals from a different perspective; as a comparison with CWT, the subsequent DCGAN generation module and Transformer feature mining and classification module remain unchanged. This comparison method is called TGT for short. The second comparison method assesses the merits of the proposed data generation method by replacing the DCGAN data generation module with the original GAN. This comparison method is called CGT for short. In addition, to examine the benefits of incorporating the Transformer module, the signal mining modules of a two-dimensional CNN and ResNet 35, which have advantages in mining two-dimensional signals, are selected for comparison with the Transformer module. The third and fourth comparison methods are referred to as CGC and CGR, respectively. It is imperative to evaluate the effectiveness of the proposed approach and compare it with the four benchmark techniques in order to assess its performance in the given task. Three cases were tested, namely the CUT1 single fault, the CUT2 single fault, and the CUT3 compound fault. First, five tests were carried out in each of the three cases, and the accuracy of each test is shown in the bar chart in Figure 15. As displayed in Figure 15, for the three test cases, the performance of the proposed method is the best compared with the four comparison methods. Taking the CUT1 single fault as an example, the proposed method demonstrates an average accuracy of 97.8% in five distinct tests, attaining a remarkable level of performance, while the diagnostic accuracies of the four comparison methods are 86.2% (TGT), 94.2% (CGT), 95.2% (CGC), and 96.1% (CGR), respectively. In addition, for the three test cases as a whole, among the four comparison methods, comparison method 4 has the best performance, followed by comparison methods 3 and 2, while comparison method 1 has the worst performance. The feature extraction of the circuit output signal through wavelet transformation exhibits a notable positive influence on the subsequent classification process. DCGAN produces samples that are more closely similar to real data, surpassing GAN in accuracy. The Transformer is better at image feature mining and classification than the 2D CNN and ResNet 35.
In addition, using the three test cases mentioned above as examples, Figure 16 displays box plots for the proposed method and the four comparison methods across the five experiments. As shown in Figure 16, the diagnostic accuracy of the proposed method is superior to that of the four comparison methods for the three test cases. In addition, in terms of the distance from the 1/4 quantile to the 3/4 quantile, in the three test cases the proposed approach is more compact than the four evaluated methods. Among the comparison methods, the first comparison method has the largest distance from the first quartile to the third quartile, while the other three comparison methods are relatively closer. Therefore, it is evident that the proposed method displays stable and robust performance, while the robustness of comparison method 1 is the worst.

Small Sample Dataset Testing For the purpose of evaluating the impact of a small sample size on the quality of the data generated by the DCGAN model, and then determining the appropriate sample size for the generated data, a total of five different training sets were set up. The numbers of real and generated samples differ in each combination, but for each fault class in any circuit case, the total number of training samples is 300. In this paper, the CUT2 compound fault is used as an illustrative example, and the specific settings of the dataset are shown in Table 5, which also gives the average accuracy over five tests of the proposed method. Table 5 reveals a rising trend in the average accuracy as the number of real samples in the training set increases and the number of artificially generated samples decreases. When the quantities of real samples and artificially generated samples are both 150, the average attained precision is 96.65%. However, as the share of real samples increases further, the rate of improvement in the diagnostic performance of the proposed algorithm noticeably decreases. Therefore, considering the difficulty of obtaining real samples, it is plausible to conclude that the proposed technique achieves satisfactory diagnostic performance when the numbers of real and generated samples both reach 150.

Research Outcomes and Limitations of the Proposed Method In this study, the problem of overfitting easily caused by insufficient data in analog circuits is addressed, and automatic feature extraction, sample expansion, and high-precision fault classification are realized in the CWT-DCGAN-Transformer model. However, the automatic optimization of hyper-parameters such as the learning rate and the network structure of the model remains insufficiently explored.

Conclusions To address the challenge of limited fault samples in electronic equipment, this paper has developed a small-sample fault diagnosis approach for analog circuits based on CWT-DCGAN-Transformer, achieving two significant breakthroughs.
(i) In scenarios with a limited number of real fault samples, the proposed approach demonstrates satisfactory performance. For instance, in cases where the real sample size is only 150, the diagnostic accuracy of a test case exceeds 96%; (ii) The proposed approach eliminates the need for the manual design of fault features. Through the joint processing of the continuous wavelet transform and the Transformer model, it autonomously explores time-frequency fault features and achieves self-attention-based extraction and classification of these features. This leads to efficient diagnostic performance. On one hand, this work offers a solution to the challenging problem of small-sample fault diagnosis in practical engineering. On the other hand, it explores the potential of Transformer models in the field of fault diagnosis. In the future, it is of paramount importance to explore how to automatically optimize the different modules and networks within the proposed approach.
Figure 6. Flowchart of the proposed analog circuit small sample fault diagnosis scheme.
Figure 9. The circuit schematic of the Thomas filter circuit.
Figure 10. The response of analog circuits under different fault states. (a) Sallen-Key bandpass filter circuit. (b) Thomas filter circuit.
Figure 14. Confusion matrix of diagnostic results of the proposed method in four cases. (a) CUT1 single fault case. (b) CUT2 single fault. (c) CUT3 single fault. (d) CUT1 compound fault.
Figure 15. The diagnostic results of the proposed method and the four comparison methods in 5 tests. (a) CUT1 single fault; (b) CUT2 single fault; (c) CUT3 compound fault.
Figure 16. Box diagram of diagnosis results of the proposed method and the four comparison methods. (a) CUT1 single fault. (b) CUT2 single fault. (c) CUT3 compound fault.
Table 2. Single fault of Thomas filter circuit.
Table 3. Compound faults of Thomas filter circuit.
Table 4. Fault diagnosis results of CUT1 single fault test.
Table 5. Comparison results of the impact of different training set compositions on the diagnosis of the proposed method.
12,985
2023-11-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Numerical Simulation of Fracturing in Coals Using Water and Supercritical Carbon Dioxide with Potential-Based Cohesive Zone Models The Park-Paulino-Roesler (PPR) cohesive zone model (CZM) for coal was established for analyzing mixed-mode I/II fractures using semicircular specimens under punch-through shear (PTS) and three-point bending (SCB) tests. In these methods, the main parameters of the fracture were obtained through SCB tests and PTS tests. According to the experimental results, the coal specimens show obvious characteristics of ductile fracture under mode I and II loading. Moreover, hydraulic and supercritical carbon dioxide (ScCO2) fracturing tests were conducted, and it was found that the crack initiation pressure of coal specimens for hydraulic fracturing is 17.76 MPa, about 1.59 times that driven by ScCO2, while the crack initiation time of coal with ScCO2 fracturing is 123.73 s, which is 1.58 times that for hydraulic fracturing. A macrocrack eventually formed in the hydraulically driven coal specimen, which penetrated the entire specimen. In contrast, in the coal specimens fractured by ScCO2, no crack penetrated the whole specimen, while several widely distributed secondary cracks formed. Furthermore, zero-thickness pore-pressure cohesive elements were utilized to investigate multicrack propagation in coals undergoing hydraulic and ScCO2 fracturing. The constitutive relationships of the established PPR CZM were introduced into the cohesive elements. The obtained results are consistent with the hydraulic and ScCO2 fracturing experiment results for the coal specimens. This indicates that the established PPR CZMs can accurately represent the crack propagation behavior in coals under hydraulic and ScCO2 fracturing. Introduction As an essential type of clean energy, the exploitation of coalbed methane (CBM) is of significant importance to increase the supply of clean energy, thereby decreasing concerns about greenhouse gases and realizing safe coal mining [1,2]. Studies show that the low permeability of coal seams is one of the main challenges for the efficient exploitation of CBM. Generally, coal permeability in Chinese mines is less than 1 mD [3], which is much lower than that in the United States, Australia, and other countries. In this regard, hydraulic fracturing is a widely adopted technology to improve CBM permeability and, therefore, production, by injecting a large volume of water-based fluid to create and extend fracture networks [4,5]. Hence, in the process of hydraulic fracturing, the crack propagation behavior in coals has a direct influence on the effect of CBM exploitation. However, there are some drawbacks to hydraulic fracturing technology; for example, it consumes and pollutes large volumes of water, and the fracturing fluid causes "water-sensitivity" and "water-lock" effects in the coalbed methane reservoir [6]. In order to resolve these shortcomings, numerous nonaqueous fracturing technologies have been proposed [7,8], among which the supercritical carbon dioxide (ScCO2) fracturing technology has attracted much attention [9]. ScCO2 refers to a special state of CO2 when its temperature and pressure exceed 31.1°C and 7.38 MPa, respectively, and it has unique physical and chemical characteristics, including low viscosity, a high diffusion coefficient, and high density [10].
Some researchers have conducted experiments on rocks with ScCO2 fracturing, and the results show that ScCO2 fracturing can produce more widely distributed and complex fracture networks in rocks than hydraulic fracturing, which can significantly increase reservoir permeability [11][12][13]. In addition, ScCO2 also has a good displacement effect on the methane adsorbed in the coal seam, which is beneficial to improving the yield of coalbed methane [14]. Therefore, ScCO2 fracturing technology can promote the efficient exploitation of coalbed methane, and research on the crack propagation law of coal driven by ScCO2 is of primary importance. Since the groundbreaking work of Irwin [15] and Griffith [16], linear elastic fracture mechanics (LEFM) was established, becoming a highly effective theoretical framework for analyzing crack propagation in brittle solid materials. Bieniawski [17,18] systematically introduced LEFM into the research of crack propagation behavior in rocks, and since then, rock fracture mechanics has been widely applied to rock materials. Generally, the fracture toughness (K_c) is applied as an indicator reflecting crack propagation in natural materials [19][20][21]. Nevertheless, LEFM is mostly limited to investigating crack propagation in brittle rocks. Yet, some soft rocks, such as coal, exhibit generally ductile failure behaviors, represented by an obvious strain-softening stage after the peak stress when the crack initiates [22,23]. This is because of the fracture process zone (FPZ) [24] of these soft rocks, i.e., the particular region in front of the crack tip, where a series of nonlinear softening behaviors, including microcrack initiation, plastic strain, and mineral crystal friction, occur and are sizable. The FPZ is also nonnegligible relative to the size of the rock specimen and the size of the crack. In comparison to brittle rocks, abundant primary pores and microfissures exist in the coal body [25], causing the ductile fracture characteristics of coals to be more prominent. Thus, the theory of LEFM does not apply to the study of the fracture behavior of coals. The CZM, inspired by the studies of Barenblatt [26], Dugdale [27], and Hillerborg et al. [28], has been used with success to represent the crack propagation behavior in the nonlinear FPZ of ductile materials. In this theory, the FPZ is simplified hypothetically to a discrete line or plane, corresponding to either a two-dimensional or three-dimensional case, respectively, in which the hypothetical cohesive stress causes the virtual crack to close (see Figure 1). The constitutive relation of the CZM is represented by the relationship between the cohesive stress and the relative displacement across this line or plane. This constitutive relation is usually nonlinear and depends on the form of the stress and the evolution characteristics of the damage variables. When the material in this region is completely damaged, the cohesive stress is lost, which means that a new macrocrack surface is generated. The energy consumed in this damaging process is the fracture energy of the material. Accordingly, it is concluded that the cohesive crack model can effectively characterize the ductile fracture behavior of coal. Based on the CZM, numerical crack propagation models for hydraulic fracturing in rock have been established by the finite element method [29], the extended finite element method [30], etc.
However, the constitutive relationship of the softening curve has a huge impact on fracture behaviors [31,32], and linear or bilinear constitutive relationships of CZMs have been adopted in previous numerical models. Hence, it is necessary to establish the CZM of coals to provide an accurate numerical model to predict crack propagation. Mixed-mode I/II crack propagation is prone to occur in coals under engineering conditions, especially in supercritical carbon dioxide fracturing [33]. Reviewing the literature indicates that, although the LEFM method does not reflect ductile fractures, it remains the most widely used scheme to investigate crack propagation in coal [34,35]. On the other hand, cohesive interactions between fractured surfaces are the main failure mechanisms in the mixed-mode I/II CZM. These interactions can be expressed through stress-separation equations on the fractured surfaces. Nonpotential-based models [36][37][38] were established to characterize the ductile fracture behavior of materials. Considering symmetric systems in cohesive interactions, these models can be developed simply. Nevertheless, the main drawback of nonpotential-based models is that one model cannot explain all possible separations in ductile materials. Furthermore, the asymmetric tangential stiffness of the material increases the computational expense. An effective solution for this problem is to apply potential-based models, in which the first derivative of the fracture potential energy function [39] gives the cohesive stress over the fractured surfaces, while the second derivative reflects the constitutive association. Based on potential-based models, Park et al. [40] proposed the Park-Paulino-Roesler (PPR) model to simulate cohesive fracture [41,42]. In this model, fracture energy (including modes I and II), different initial slopes, and cohesive strengths are considered. Meanwhile, corrective variables are defined to cover a wide range of failures in different ductile materials. This model resolves the disadvantages of traditional potential-based models. In this work, we performed SCB and PTS tests on semicircular and cylindrical coal specimens to calculate the fracture parameters. As shown in Figure 2, coal samples were prepared into semidisk-like specimens, an artificial crack was prefabricated along the symmetric center starting from the center of the bottom edge of the specimen, and vertical loads were applied on the top of the arc to cause mode I fracture of the specimen. The diameter (2R) and thickness (B) of the coal SCB specimen were set as 70 mm and 25 mm, respectively. The ratio (α/R) of the preset crack length to the specimen radius was set as 0.35, and the ratio (S/R) of the supporting roller span at the base to the specimen diameter was 0.5. The loading mode is displacement control, and the loading speed is 0.02 mm/min. In addition, the crack tip opening displacement (CTOD) of the specimen was measured by the fiber Bragg grating (FBG) technique (with an accuracy of 0.5 microstrains) throughout the experiment. In this study, three groups of effective SCB tests were conducted on the coal specimens. In this regard, Figure 3 illustrates the load-CTOD curves of the coal SCB specimens during the whole experiment; according to the various characteristics of the experimental curves, the experimental process is generally divided into four stages, including the compaction, elastic deformation, peak load, and postpeak damage stages.
When crack initiation occurs in coal specimens, the accumulated energy is not released instantaneously, and there is a nonlinear damage process in the postpeak loading stage. The CTOD of the three coal SCB specimens increased by 0.1192 mm, 0.1153 mm, and 0.0895 mm, respectively, with an average value of 0.108 mm, from the beginning of the specimen subjected to the force to the formation of a new crack surface, that is, from the intact specimen to the fracture of the specimen. PTS Test. This method was first proposed by Backers et al. [43] to investigate the fracture of materials in mode II loading conditions. The PTS test was used here to simulate mode II fracture experiments in coal, as it is easy to process, and the experimental results are reliable. As shown in Figure 4, the specimen was a circular cylinder with a diameter D and had circular notches with a diameter ID drilled into the upper and lower end faces along the central axis of the cylinder. Circular cylinders with diameter D and height L were prepared. Moreover, two circular notches were prepared with depths a and b near the upper end and lower end of the specimen, respectively. The width of notches was set to t, and the effective shear length was IP = L-a-b. The experiment was performed using a loading cylinder to apply a vertical shear force (P). It is found that as the applied shear stress increases, the crack propagates along the notches parallel to the axis of the cylinder, as well as mode II fracture characteristics which were then able to be acquired from the experimental results. The diameter D and height L of the specimens was set to 50 mm, and the coals were cut into the PTS specimens using diamond wire cutting under the CNC machine tools. This can limit the micromechanical damage to the coal specimens and improves machining accuracy. In addition, notches with a diameter of ID = 35 mm were prefabricated along the same central axis utilizing a 0.5-millimeter-thick diamond bit with the CNC machine tools. Parameters a and b of these notches were set to 10 mm and 30 mm, respectively. Moreover, the Figure 2: Coal SCB specimen [20]. 3 Geofluids length of IP was set to 10 mm. Over the experiment, the specimen of coal PTS was located in advance on the bottom support, which possesses cylindrical grooves with a diameter of 35 mm and a depth of 15 mm. A shear load was employed to the specimen of coal using a loading cylinder with a diameter of 35 mm. Finally, the test was in the mode of displacement control with a constant rate of 0.02 mm/min to ensure stable crack propagation. Three experiments were executed each for the coal. The stressstrain curves were recorded. Figure 5 shows the experimental curve of coal shear load and tangential displacement, which represents the typical coal type II fracture characteristics. In the initial stage, the shear load has a linear correlation with the shear displacement. When the critical value was obtained for the shear displacement, mode II cracks begin to occur in the FPZ of the coal sample and local damage occurs. Within the postpeak stage, the shear load progressively reduces with the enhancement of shear displacement, and the coal sample presents the characteristics of ductile fracture. The average maximum tangential displacement of the PTS coal specimen is 0.055 mm, and the nonlinear damage softening stage appears in the postpeak stage of the shear process of the PTS coal specimen. 
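The shear strength evaluated at the start of the next passage is simply the peak shear load divided by the effective shear area. Assuming that area is the cylindrical surface between the two notches, π·ID·IP (an assumption consistent with the punch-through geometry just described), a minimal sketch reads:

```python
import math

def pts_shear_strength(peak_load_N, ID_m=0.035, IP_m=0.010):
    """Shear strength of a punch-through shear (PTS) specimen: peak shear load
    over the effective cylindrical shear area pi*ID*IP.  The form of the
    effective area is assumed here from the specimen geometry."""
    area = math.pi * ID_m * IP_m           # effective shear surface (m^2)
    return peak_load_N / area              # shear strength (Pa)

# A peak shear load of roughly 2.6 kN reproduces the 2.36 MPa quoted below:
print(pts_shear_strength(2.6e3) / 1e6, "MPa")
```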
In addition, by calculating the ratio of 4 Geofluids the peak shear load to the effective shear area, the shear strength (τ t ) of coal can be obtained directly. The calculation formula is as follows: where P scr is the peak shear load. According to the above formula, the shear strength of the coal sample is 2.36 MPa. PPR CZM for Coals In the PPR model, the normal and tangential cohesive interactions (T n , T t ) are functions of the normal or tangential separation (Δ n , Δ t ), respectively. It should be indicated that T n approaches zero when Δ t reaches the tangential conjugate final crack opening displacement ( δ t ) or Δ n reaches the maximum normal crack opening width (δ n ). This represents complete normal failure. Similarly, when Δ t reaches its maximum displacement of tangential crack opening (δ t ) or Δ n attains the normal conjugate final crack opening width ( δ n ), full tangential failure takes place. The expressions are as follows: When Δ n ðΔ t Þ reaches the critical opening width δ nc ðδ tc Þ, the value of T n ðT t Þ is the maximum normal cohesive strength (σ max ). This is shown as follows: The mode I and mode II fracture energy (Φ n , Φ t ) can be calculated by the area underneath the cohesive interactions, as follows: In this study, the mode I and II fracture energies of coal samples were calculated, respectively, by the unit integral area under the load-relative opening displacement curves in Figures 3 and 5 of the above two kinds of tests. The specific shape of the softening response, i.e., the constitutive relationship of the softening process, remarkably affects the crack propagation. Therefore, nondimensional shape parameter indices (α, β) are introduced into the PPR model. When the shape parameter indices are equal to 2, the gradient of the softening process represents a nearly linear relationship. If the shape criteria are lower than 2, the cohesive interactions have concave softening trends. Conversely, if the shape indi-ces are higher than 2, the gradient of the softening procedure demonstrates a convex shape. Considering the foregoing macroscopic fracture criteria and boundary conditions, the potential energy function can then be mathematically expressed in the form below: The cohesive interactions T n and T t are obtained by taking the first derivative of the PPR model along with the normal vector and tangential vector, respectively, as follows: where <· > is the Macaulay bracket function, whose calculation is as follows: where m and n are the nondimensional exponents, which are determined by the shape parameter indices (α, β) and the boundary conditions of the critical separations (Equations (4) and (5)). m and n are determined by where λ n and λ t are the initial slope indicators, i.e., the ratio of the critical crack opening displacement to the maximum crack separation displacement, as determined by where Γ n and Γ t are considered energy constants, which are functions of mode I and II fracture energy (Φ n , Φ t ). When Φ n is different from Φ t , the formulas of the energy constants are as follows: When the values between Φ n and Φ t are equal, the simplification of energy constants to the following expression is possible: The unknown parameters of the PPR model needed to establish the PPR CZMs for the coals can be calculated based upon data acquired from the SCB tests and PTS tests. The maximum normal and tangential crack opening width can be determined by considering the boundary conditions of the cohesive strength (4) and (5) and fracture energy (6). 
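Before the closed-form expressions for the final opening widths are quoted in the next passage, the parameter chain just described can be summarized in code. The sketch below reproduces the relations of the published PPR formulation (non-dimensional exponents, energy constants for the case Φ_n ≠ Φ_t, and the normal/tangential tractions); the variable names are mine, and the expressions should be checked against the paper's equations before reuse.

```python
import numpy as np

def macaulay(x):
    """Macaulay bracket <x>: x if x > 0, else 0."""
    return x if x > 0.0 else 0.0

def ppr_exponents(alpha, beta, lam_n, lam_t):
    """Non-dimensional exponents m, n from the shape indices (alpha, beta) and the
    initial-slope indicators (lam = critical opening / final opening)."""
    m = alpha * (alpha - 1.0) * lam_n**2 / (1.0 - alpha * lam_n**2)
    n = beta * (beta - 1.0) * lam_t**2 / (1.0 - beta * lam_t**2)
    return m, n

def ppr_energy_constants(phi_n, phi_t, alpha, beta, m, n):
    """Energy constants for phi_n != phi_t (the case reported for the coals);
    the degenerate case phi_n == phi_t has a separate simplified form."""
    gam_n = (alpha / m) ** m * (-phi_n if phi_n > phi_t else 1.0)
    gam_t = (beta / n) ** n * (-phi_t if phi_t > phi_n else 1.0)
    return gam_n, gam_t

def ppr_tractions(dn, dt, phi_n, phi_t, dn_max, dt_max, alpha, beta, m, n, gam_n, gam_t):
    """Normal and tangential cohesive tractions T_n, T_t derived from the PPR potential."""
    rn, rt = dn / dn_max, abs(dt) / dt_max
    fn  = gam_n * (1.0 - rn) ** alpha * (m / alpha + rn) ** m
    ft  = gam_t * (1.0 - rt) ** beta  * (n / beta  + rt) ** n
    dfn = gam_n / dn_max * (m * (1.0 - rn) ** alpha * (m / alpha + rn) ** (m - 1)
                            - alpha * (1.0 - rn) ** (alpha - 1) * (m / alpha + rn) ** m)
    dft = gam_t / dt_max * (n * (1.0 - rt) ** beta * (n / beta + rt) ** (n - 1)
                            - beta * (1.0 - rt) ** (beta - 1) * (n / beta + rt) ** n)
    T_n = dfn * (ft + macaulay(phi_t - phi_n))
    T_t = dft * (fn + macaulay(phi_n - phi_t)) * np.sign(dt)
    return T_n, T_t
```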
The equations are as follows: where the parameters of δ n , δ t , Φ n , Φ t , σ max , and τ max have already been determined. The initial slope indicators λ n and λ t can be calculated by Equation (12). The calculation results of the different coals are listed in Table 1. Finally, substituting Equation (11) into Equations (15) and (16), the nondimensional shape parameter indices (α, β) could be ascertained by solving the above equations. The values of α ðβÞ for the three different coals are shown in Table 1 as well. Based on the experimental results, Φ n is different from Φ t for the coals; hence, the energy constants Γ n and Γ t were calculated using Equation (12). The results are listed in Table 1. In addition, in order to determine the cohesive interaction region, the last displacements of conjugate crack opening δ t and δ n can be calculated by Equations (14) and (15). The cohesive interaction region of mode I is (0.108, 0.009), and the cohesive interaction region of mode II is (0.108, 0.055). When the normal or tangential separation displacement (Δ n , Δ t ) exceeded the region, the cohesive stresses of mode I and mode II were set to zero. Crack Propagation Experiment of Coals for Hydraulic and ScCO 2 Fracturing 4.1. Experimental Preparation and Process. In this section, trimmed samples with a diameter of 50 mm and a length of 100 mm were used in the experiment. Figure 6 illustrates the configuration of samples, indicating that there is a central borehole in the upper surface of the specimen. Then, a 3 mm steel pipe was inserted into the borehole to inject the fluid to simulate the fracturing well. The schematic diagram of the hydraulic and ScCO 2 fracturing experimental device is shown in Figure 7. In order to perform the fracturing tests, 6 Geofluids the prepared specimens were placed in a pressurized kettle. In order to prevent damage to the specimens, the pressure increment rate was set to 1 MPa min −1 . Meanwhile, σ a and σ c were set to 10 MPa and 8 MPa, respectively. In addition, the temperature of the triaxial pressure kettle was set at 40°C and maintained for 3 hours prior to the fracturing test to ensure that the coal sample was fully heated, and the injection flow of the fracturing fluid was set to 20 mL min −1 . When ScCO 2 fracturing is performed, the steel injection pipe was heated to 40°C in advance so that the temperature of carbon dioxide is above its critical temperature (31.1°C). Sample breakdown was identified to have occurred once the fracturing fluid pressure reduces suddenly and simultaneously; the confining pressure increases abruptly. Hydraulic and ScCO 2 Fracturing Experimental Results. Two samples were run for each experimental condition. Figure 8 shows the four fluid pressure-time curves of coal specimens by the two kinds of fluid fracturing. The fracturing process of all coal specimens can be divided into three stages. The first stage consists of fluid pressure rising. Fracturing fluid is continuously injected into the fracture specimen by a high-pressure pump to resist the strength of the specimen under confining pressure. At the beginning of this stage, the growth rate of the fluid pressure in each coal specimen is very low, especially for ScCO 2 fracturing, because it takes time to fill the anhydrite section of the coal specimen after the fracturing fluid injection. In addition, fracturing fluid injection into the coal body immerses and infiltrates the specimen. 
In this early stage, the slow increase in fluid pressure becomes significant for ScCO 2 fracturing; this is attributed to the relatively developed fracture structure in the center hole of the coal body, and the infiltration effect of ScCO 2 is noticeable than water. The second stage consists of crack initiations. When the injected fluid pressure reaches a specific critical value, the critical condition for the crack propagation of the coal specimen is reached, and the coal specimen ruptures. The critical pressure is called the crack initiation pressure of fracturing, and the critical fracturing time it takes to reach the crack initiation pressure is called the crack initiation time. The two kinds of fluid fracturing results of the coal specimens are shown in Table 2. The average crack initiation pressure of coal specimens for hydraulic fracturing is 17.76 MPa, about 1.59 times that driven by ScCO 2 fracturing. The third stage is the pressure drop stage; when fracturing occurs in the coal sample, the water pressure accordingly decreases. For hydraulic fracturing in coals, the fluid pressure obviously decreases after fracturing. And there is a significant fluctuation in the fluid pressure after ScCO 2 fracturing. This is because the crack in the coal sample does not completely penetrate the sample. Due to the surrounding rock and axial pressure, the crack closes once again in the coals, and the continuously injected ScCO 2 will drive crack propagation in the specimen repeatedly until the specimen is completely broken. Figure 9 shows the final crack propagation morphology in the cylindrical coal specimens. The coal specimen eventually formed a macrocrack under the hydraulic drive, which penetrated through the whole cylindrical specimen. Yet, there was no crack penetrating the whole fracture specimen and several widely distributed secondary cracks in the fractured coal specimens by ScCO 2 . This is because water has greater viscosity and density, which is easy to produce tensile failure in coal specimens, and eventually forms a single penetrating crack. On the other hand, because of the large diffusion coefficient and strong permeability of ScCO 2 , the influence range in the coal is large, so it is easy to form a Numerical Simulation of Fracturing in Coals Based on the PPR Model 5.1. Governing Equations. Hydraulic and ScCO 2 fracturing of coal is a complex, multifield coupling process. Compared with the multiphysical coupling problems in rock mechanic engineering, this crack propagation process in a solid is coupled with fracturing, causing it to be more challenging to model and calculate. The physical model of the fracturing process in coals is shown in Figure 10. Ω represents the entire range of the hydraulic fracturing models, and the fracture width w is located in the center of the model. The two sides of the fracture consist of coal materials. The coal is a porous medium, containing the solid skeleton and pores. In the fracture, Q represents the flow rate of the injected fracturing fluid. Some of the fluid will be filtered along the upper and lower crack surfaces and permeate into the coal through the pores. The pressure generated by the injected fracturing fluid in the fracture is defined as P f . When the fluid pressure reaches a critical value, the crack in the coal body will expand. The coal fracturing numerical simulation was performed through the pore pressure cohesive element. Figure 11 shows the pore pressure cohesive element with the fluid pressure node. 
When the cohesive element is affected by the external force, the upper node (1, 2) and the lower node (3,4) in the element are relatively displaced, which damages the cohesive element. Once the critical condition is reached, the cohesive element is destroyed, fracturing the material. During this process, the normal cohesion of the element also changes with the change in normal opening displacement. Also, the tangential cohesion of the element changes with the change of the tangential displacement. The relationship between the two cohesive forces and the displacement of the element nodes is the constitutive relationship of the cohesive crack. As shown in Figure 11, the pore pressure cohesive element consists of adding a group of pore pressure injection nodes (5,6) in the center of the original cohesive element. The injected fluid pressure is already included in the cell calculation model through this pressure node, and the injected fluid pressure causes the relative displacement on the lower and upper surfaces of the cohesive element, resulting in continuous damage to the cohesive element until it is destroyed. This represents crack growth in the fracturing model. In this multifield coupled numerical model, there are four governing equations, including the deformation equation of porous media in coal, the pore seepage equation of porous media in coal, the fracture flow equation in coal, and the constitutive equation of a cohesive crack in coal. (1) In this model, if the pores in the coal body are filled with a single liquid (water or ScCO 2 ), the deformation of the coal body includes deformation of the Figure 11: The pore pressure cohesive element. 8 Geofluids solid skeleton and deformation of the liquid in the pores. According to the momentum conservation equation, the following formula can be obtained: where σ is the total stress tensor of the porous media, b is the physical force of the porous medium, and ρ is the density of the coal. The coal density has the following formula: where n is the porosity of the coal, ρ s is the solid skeleton density of coal, and ρ f is the fluid density in the coal pore. According to the Biot porous elasticity theory [44] and the Terzaghi theory [45], the effective stress in coal can be calculated as follows: where δ ij is the Kronecker-delta symbol, p w is the pore pressure in the coal, and α is the Biot coefficient. The Biot coefficient is defined as follows: where K b is the total volume modulus of the porous coal media and K S is the modulus of the solid skeleton in the coal. For the incompressible solid material, K s = ∞, α = 1. If the Biot coefficient (α) is equal to 0, the porous media material will degenerate into a dense linear elastic solid material. Based on the hypothesis of small deformation, the formula is as follows: where ε i,j is the strain tensor, u i is the displacement vector, and u i,j and u j,i are the partial derivatives of displacement. The constitutive relation between the stress and strain of coal can be expressed as follows: where D ijkl is the elastic tensor of coal. (2) Pore seepage in the coal body should satisfy the following mass conservation equation: where 1/Q is the compressibility coefficient of the fluid, p w is the pore pressure of coal, and _ w w is the velocity vector of Darcy flow. The compressibility coefficient 1/Q can be calculated as follows: where n is the porosity of coal and K w is the modulus of the fluid. 
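The formulas referenced in this passage lose their symbols in extraction. For readability, the standard relations they describe are reconstructed below; the sign convention for the effective stress (tension positive, pore pressure entering with +α) and the simplified storage term 1/Q = n/K_w are assumptions inferred from the variable lists given above:

\[
\nabla\!\cdot\boldsymbol{\sigma} + \rho\,\mathbf{b} = \mathbf{0},\qquad
\rho = (1-n)\,\rho_s + n\,\rho_f,
\]
\[
\sigma'_{ij} = \sigma_{ij} + \alpha\,p_w\,\delta_{ij},\qquad
\alpha = 1 - \frac{K_b}{K_s},\qquad
\varepsilon_{ij} = \tfrac12\left(u_{i,j} + u_{j,i}\right),\qquad
\sigma'_{ij} = D_{ijkl}\,\varepsilon_{kl},
\]
\[
\frac{1}{Q}\,\frac{\partial p_w}{\partial t} + \nabla\!\cdot\dot{\mathbf{w}} + \alpha\,\frac{\partial \varepsilon_v}{\partial t} = 0,\qquad
\frac{1}{Q} = \frac{n}{K_w},
\]

where ε_v is the volumetric strain of the solid skeleton; the coupling term α ∂ε_v/∂t follows standard Biot poroelasticity and is included here for completeness even though it is not named explicitly in the variable list above.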
In Equation (16), the flow rate and fluid pressure gradient in porous media satisfy Darcy's law, and the expression is as follows: where ρ w is the density of the fracturing fluid and k w is the permeability coefficient of the fracturing fluid, which can be calculated as where μ w is the viscosity of the fracturing fluid and k is the permeation matrix. The effect of the inertia term of the fracturing fluid and the roughness of the crack surface are not considered. The fracturing fluid in the fracture itself can be divided into the tangential flow and normal flow. According to the mass conservation theorem, the fluid in the fracture should satisfy the following equation: where 1/W f is the compressibility of the fluid in the fracture, p f is the fluid pressure of the fracturing fluid in the fracture, w is the crack opening, s is the coordinate of the tangential direction along the fracture surface, q t is the flow rate of the fracturing fluid filtered from the upper surface of the fracture into the porous media, q b is the flow rate of the fracturing fluid infiltrating from the lower surface of the fracture into the porous media, QðtÞ is the flow rate of the source term of the fluid, and δðx, yÞ is the Dirac-delta function. If the fracturing fluid is incompressible, the first fluid compression term in the above equation can be ignored. The tangential flow and pressure gradient of the fracture fluid in the fracture satisfy the cubic seepage model [46,47]. They can be related by the following expression: where μ f is the viscosity of the fracture fluid. Some of the fracturing fluid in the fracture infiltrates into the coal body through the fractures. In this numerical model, the fluid flow rate is related to the gradient between the pore pressure in 9 Geofluids the coal and the pressure of the fracturing fluid in the fracture, calculated as follows: where p t and p b are the pore pressures on the upper and lower surfaces of the crack, respectively, and k t and k b are the fluid filtration coefficients on the upper and lower surfaces of the crack, respectively. The equation of the tangential fluid flow in the fracture and the equation of normal fluid flow are taken into the mass conservation equation and expressed as follows: (3)The constitutive relationship of the cohesive element in the numerical simulation of hydraulic fracturing is derived from the mixed-mode I/II PPR potential energy function. In Section 3, the fracture parameter values in the PPR CZMs of the coal were determined through the SCB tests and the PTS tests. The constitutive equations of mixed-mode I/II for a cohesive crack of the different coals were established. The normal and tangential cohesions were obtained by taking the first derivative of the normal displacement and tangential displacement, respectively, by the PPR potential function, and the stiffness matrix of the cohesive element is as follows: The stiffness components of the PPR cohesive constitutive equation are as follows: Numerical Models of Hydraulic Fracturing for the Coals. In order to compare the experimental fracturing results of the coals, the coal material parameters and boundary conditions of the coals in the fracturing numerical models were set to be the same as those in the experiments. Figure 12 shows the boundary conditions on the coal fracturing numerical geometric model with a size of 50 mm × 50 mm, as well as the meshing conditions for a section of the model. 
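For reference, before the discretized model is described further, the fracture-flow relations entering this section (cubic-law tangential flow in the fracture and leak-off through the crack faces) can be collected explicitly; the exact grouping of terms in the fracture mass balance is assumed from the variable definitions given above:

\[
q = -\frac{w^{3}}{12\,\mu_f}\,\frac{\partial p_f}{\partial s},\qquad
q_t = k_t\,\big(p_f - p_t\big),\qquad
q_b = k_b\,\big(p_f - p_b\big),
\]
\[
\frac{w}{W_f}\,\frac{\partial p_f}{\partial t} + \frac{\partial w}{\partial t}
+ \frac{\partial q}{\partial s} + q_t + q_b = Q(t)\,\delta(x,y).
\]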
In the fracturing numerical model, the coal materials were characterized by triangular solid elements, and the pore pressure cohesive elements were inserted between the triangular solid elements to simulate multiple crack propagation driven by hydraulic or ScCO 2 fracturing. To avoid the influence of the overall stiffness of the model after a large number of cohesive elements were inserted between the solid elements (see Figure 12), the corresponding upper and lower nodes in the cohesive element and their intermediate fluid pressure nodes were defined at the same position in the local coordinate system. This numerical simulation method is called the zero-thickness element method [48]. And the fluid injection point was set at the center of the numerical model, and the fluid injection rate was set to 20 mL/min. The involved numerical simulation parameters are given in Table 3, and the PPR model parameters of the three types of coals are listed in Table 1 for the numerical simulation for the hydraulic and ScCO 2 fracturing of the coal specimens. In addition, the contrast numerical simulation of the fracturing in coals was also carried out in which the constitutive relationship of the pore pressure cohesive elements was represented by the common linear elastic fracture mechanics (LEFM). Figure 13 shows the numerical simulation results of hydraulic and ScCO 2 fracturing of the coals. The fracture initiation pressure of the coal with water or ScCO 2 is 18. 45 11 Geofluids in the coals are shown in Figure 14 based on the LFEM, and there are obvious deviations between the simulation results and the test results. Figure 15 shows the simulation results of crack growth for hydraulic and ScCO 2 fracturing in the coal samples. The crack induced by hydraulic fracturing expands along the direction of the maximum principal stress, and at the same time, secondary crack propagation occurs near the main crack. And the multiple crack propagation appears in the coal model with ScCO 2 fracturing. This is also consistent with the experimental results of crack propagation in coal specimens caused by hydraulic and ScCO 2 fracturing. Therefore, this demonstrates that the established PPR CZMs can accurately describe crack propagation behavior in varying coal types for hydraulic and ScCO 2 fracturing. Conclusion In this research, the mixed-mode I/II PPR cohesive zone model (CZM) of coals was determined using PTS and SCB tests. The constitutive relationships of the established PPR CZMs were introduced into the pore pressure cohesive elements to simulate crack growth in coals caused by hydraulic and ScCO 2 fracturing. In addition, hydraulic and ScCO 2 fracturing experiments on the coal specimens were performed, and the numerical simulation outcomes were compared with the corresponding experimental results. The following conclusions can be drawn: (1) Several key fracture parameters, including the maximum normal open displacement (δ n ), the maximum tangential open displacement (δ t ), mode I fracture energy (Φ n ), and mode II fracture energy (Φ t ), were obtained through SCB tests and PTS tests. According to the experimental results, there are visible nonlinear damage processes in the stage of postpeak loading, and the coal specimens show obvious characteristics of ductile fracture under mode I and II loading. 
In addition, the mode II fracture energy of the coal is 51.62 J/m2, considerably greater than the mode I fracture energy (22.16 J/m2); this shows that mode II crack propagation in coal consumes markedly more energy. (2) In the hydraulic and ScCO2 fracturing experiments on the coals, the crack initiation pressure of the coal specimens under hydraulic fracturing is 17.76 MPa, about 1.59 times that driven by ScCO2 fracturing, while the crack initiation time under ScCO2 fracturing is 123.73 s, about 1.58 times that of hydraulic fracturing. A macrocrack that penetrated the entire specimen eventually formed under the hydraulic drive, whereas no through-going crack formed in the ScCO2-fractured specimens, which instead contained several widely distributed secondary cracks. This is because water, with its greater viscosity and density, readily produces tensile failure in the coal and ultimately forms a single penetrating crack, whereas the large diffusion coefficient and strong permeability of ScCO2 give it a wide range of influence within the coal, so it readily forms widespread mixed tensile-shear cracks in the specimens. (3) The PPR CZMs of the coal were established from the PTS and SCB tests for analyzing mixed-mode I/II crack propagation. Zero-thickness pore pressure cohesive elements, carrying the constitutive relationships of the established PPR CZMs, were used to simulate multicrack propagation in coal driven by hydraulic and ScCO2 fracturing. Overall, the numerical simulation results are consistent with the hydraulic and ScCO2 fracturing experiments on the coal specimens, indicating that the established PPR CZMs can accurately represent crack propagation behavior in coal under hydraulic and ScCO2 fracturing. Data Availability: The underlying data and figures can be found in the manuscript. Conflicts of Interest: The authors declare that they have no conflicts of interest.
8,367.8
2021-09-16T00:00:00.000
[ "Materials Science" ]
Speed-Density Model of Interrupted Traffic Flow Based on Coil Data As a fundamental traffic diagram, the speed-density relationship can provide a solid foundation for traffic flow analysis and efficient traffic management. Because of the change in modern travel modes, the dramatic increase in the number of vehicles and traffic density, and the impact of traffic signals and other factors, vehicles change velocity frequently, which means that a speed-density model based on uninterrupted traffic flow is not suitable for interrupted traffic flow. Based on the coil data of urban roads in Wuhan, China, a new method which can accurately describe the speed-density relation of interrupted traffic flow is proposed for speed fluctuation characteristics. The model of upper and lower bounds of critical values obtained by fitting the data of the coils on urban roads can accurately and intuitively describe the state of urban road traffic, and the physical meaning of each parameter plays an important role in the prediction and analysis of such traffic. Introduction Flow, speed, and density are known as the basic elements of traffic flow theory.Flow can measure the number of vehicles and the demand for traffic infrastructure.Speed is an important control index in road planning, and it is also an evaluation index of vehicle operation efficiency.Density reflects the intensity of the vehicles on the road and determines traffic management and control measures.The relationships between flow, speed, and density called fundamental diagrams play a very important role in traffic flow theory and traffic engineering.For example, the speed-flow relationship can be used in highway capacity analysis in order to determine the highway service quality, and the speed-density relationship can reflect dynamic change in traffic flow, which can be used to study the disturbance propagation between vehicles.Therefore, sound mathematical models provide a solid foundation for traffic flow analysis and efficient traffic management.The relationship between speed and density which can reflect the quality of service received from the road is attracting considerable research attention. The earliest speed-density model was a linear model proposed by Greenshields et al. [1] in 1935.The linear model overlaps and classifies the observed data groups, which is proved to be unreasonable, and observation time is a holiday, with a narrow range of representations, so there are some deviations between the derived speed-density relation and the actual situation.Later, the relationship between speed and density was studied in greater depth, and the Greenberg logarithmic model, Edie model, Underwood exponent model, Pipes-Munjal model, modified Greenshields model, Newell model, and so forth, emerged in turn [2,3].Heydecker and Addison [4] studied the relationship between speed and density under various speed limits and found that zero speed induces traffic jams, not the other way around.Ma et al. [5] derived a general logistic model of traffic flow characteristics, which includes several traffic flow parameters with clear physical meanings and analyzed the effects of the parameters on speed-density logistic curves.The experimental results showed that this model can well describe the traffic flow characteristics in different states.Shao et al. 
[6] proposed a speed-density model 2 Mobile Information Systems under congested traffic conditions combined with the minimum safety spacing constraint, and the experimental results showed that the absolute error of this model was smaller than that of other models fitting the traffic data of two freeways.Wang et al. [7] proposed a family of speed-density models with different numbers of parameters with important physical significance and got good performance in the final experiment. All of the above studies are based on continuous traffic flow data.These data, also called uninterrupted traffic flow, are traffic flow with no effect of external fixation factors, such as freeway, urban expressway, and so forth.Discontinuous traffic flow, referred to as interrupted traffic flow, is periodically influenced by external fixation factors.The most common interrupted traffic flow is originated by signal lamps of urban intersections.Because of the variety of vehicle types, the periodic effect of signal lamps, shunts in the canal section, and other factors, the characteristics of interrupted traffic flow are very complex compared with uninterrupted traffic flow.In addition, the city is still in a rapid increase in population and, with the development of economy, people are more inclined to self-driving travel, thus more and more vehicles and more and more congestion in the city, which leads to the increase of travel time, the growth of fuel consumption [8], the aggravation of environmental pollution, and other awful issues [9,10].Compared with the highway, the urban road has a strong influence on the individual, society, and the environment.Therefore, further study of the characteristics of interrupted traffic flow to provide support for management decisions is particularly important. Research on interrupted traffic flow has attracted a lot of attention [11][12][13][14][15].Many scholars see traffic flow located at a certain distance from the intersection as continuous traffic flow, believing that it can be described by continuous traffic flow models.Some of the literature [16,17] suggests, however, that because of the short distance between intersections in the city and the influence of signal lamps, there are differences between traffic flow located at a certain distance from the intersection and the traffic flow of freeways.Because traffic data are difficult to obtain and for other objective reasons, only a few scholars focus on the speed-density model of discontinuous traffic flow.Wang et al. [18] introduced a fourparameter logit model for complete data fitting and established a speed-density logit model for left-turning, straight, and right-turning traffic flow.However, the experimental data were obtained by VISSIM simulation, and the simulation parameters were not accurate enough to depict the complex city road environment, so the experimental results have certain limitations.Wang et al. [19] thought that the stochastic model would contain more traffic information and put forward the stochastic speed-density model.This stochastic model can generate a probabilistic traffic flow model and can achieve real-time traffic prediction. 
In order to provide favorable data analysis and presentation for city traffic, thus to provide decision support for intelligent transportation, characterizing the speed-density relationship of interrupted traffic flow more accurately is full of importance.By analyzing a large amount of data, we propose a description method for a speed-density relationship model which is suitable for discontinuous traffic flow, using the upper and lower curves to describe the upper and lower bounds of velocity values.Because of the discrepant characteristics of the traffic flow in the outer and inner lanes, the coil data of the outer and inner lanes are analyzed and verified. Speed-Density Model Three basic parameters (flow , speed , and density ) are the core content of the traffic flow model.The three have the following relationship: that is, flow is the product of density and speed.The relationship between two parameters of the three is of great significance in traffic flow, and the relationship between speed and density has received a lot of research attention.Greenshields et al. was an early researcher, who proposed the speed-density linear relationship [1]: where is the speed of free flow, that is, the speed of vehicles unimpeded when the traffic density tends to zero, and is the density of block flow, that is, the density when the traffic flow is blocked and cannot move.As shown in Figure 1, when = 0, the speed can reach the theoretical maximum value, namely, the free flow velocity .The area surrounded by the abscissa, the ordinate of any point on the line, and the coordinate origin is the traffic flow.Equation (2) can change to Respectively, introduce (2) and ( 3) into (1), and we get Equations ( 4) illustrate that - and - are quadratic function relations, as shown in Figure 1. The linear model is too simple, and there are many deficiencies.In order to improve the model, scholars have proposed models based on the linear model but with a higher degree of accuracy.simulation software to set up and change six parameters of road traffic, including section length , stretch section length , cart rate , signal period , the ratio of the time span of left-turn green signal to signal period , and the ratio of the time span of the straight green signal to signal period , and established 22 groups of parameters.The simulation results showed that the relationship between speed and density presents an inverse S curve.Therefore, a four-parameter logit model is proposed here to describe the speed-density inverse S curve, and its expression is as follows: where min is the mean value of the minimum speed, max is the mean value of the maximum speed, is the flow value of a section, is the flow value at the inflection point of the curve, and is a parameter determining curve shape.Then, the data obtained from the 22 groups of simulation parameters were fitted.The four parameters ( min , max , , and ) were calculated for each simulation environment. and were, respectively, fitted in left-turn, straight, and rightturn cases, and the fitting results are as follows: The Description Method of the Speed-Density Model for Interrupted Traffic Flow 3.1.The Characteristics of the Data of Interrupted Traffic.Coil data for one day, three days, seven days, and fourteen days were selected to compare and analyze the discontinuous flow data and the existing six speed-density models, as shown in Figure 2. 
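Before turning to the findings of that comparison, note that the relations referred to in this section lose their symbols in extraction. The Greenshields relations they describe, and a generic four-parameter logistic (inverse-S) curve of the kind used in the cited logit model, are restated here for reference (the exact parameterization of the cited model may differ):

\[
v = v_f\left(1 - \frac{k}{k_j}\right),\qquad
q = k\,v = v_f\left(k - \frac{k^{2}}{k_j}\right),\qquad
q = k_j\left(v - \frac{v^{2}}{v_f}\right),
\]
\[
v(x) = v_{\min} + \frac{v_{\max} - v_{\min}}{1 + \exp\!\big((x - x_0)/\theta\big)},
\]

where v_f is the free-flow speed, k_j the jam density, x the independent variable of the logit fit (a flow-type quantity in the cited work), x_0 its value at the inflection point, and θ a shape parameter.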
We found the following: (1) The six models' performance was poor when the coil data of interrupted traffic flow were fitted, illustrating that although suitable for uninterrupted traffic flow (2) The interval value of critical densities of one-day, three-day, seven-day, and fourteen-day data sets was [62.56 pcu/km, 71.23 pcu/km], and most of the data were located in < range, meaning unimpeded flow data accounted for the absolute proportion, so the traffic flow of the location coil was in a state of flow most of the time.(3) When < , with the increase of density, the velocity decreased sharply; when > , as the density increased, the velocity decreased slowly, and the speed variation amplitude was very small.(4) When the density was small, the speed had a large range of values, of which the largest was [23 km/h, 72 km/h].We filtered out the small-density data to obtain a scatter diagram of flow and occupancy which were directly collected by a loop detector, as shown in Figure 3.In Figure 3 it is obvious that the loop detector acquires large-range flow values for the same occupancy value, and the largest range can reach 100 pcu/h.Hence, after calculating speed and density by density formula and velocity formula, speed accordingly has a large range of values for the same density in speed-density diagram.(5) In addition, the density values were found to be near a number of points, and the difference between adjacent points was approximately equal to a certain value. From the above analysis, we found that, because of the big differences between uninterrupted and interrupted traffic flow, existing models suitable for uninterrupted traffic flow are unsuited for describing the speed-density relation of interrupted traffic flow.What is more, the flow collected by a loop detector has a large range of values.Therefore, for the speed-density relation of interrupted traffic flow, we must find a new descriptive method. Description Method of Speed-Density Relationship for Interrupted Traffic Flow.Because of the difference between the uninterrupted and interrupted traffic flow and the volatility of speed, the speed-density relationship cannot be adequately described by a single model, so we use two curves, upper and lower , to describe the supremum and infimum of velocity values: where upper and lower are, respectively, the upper and lower bounds of velocity and upper and lower are fitting functions. Divide the density interval [ min , max ] into connected intervals 1 , 2 , . . ., .Partition data as 1 , 2 , . . ., by density intervals, and correspondingly get speed sets 1 , 2 , . . ., , causing that, for any ∈ (1, 2, . . ., ), we have where ( ) is used test for with the Shapiro-Wilk normal test method.Sort independent observations in by nondescending order, recorded as 1 , 2 , . . ., , and construct the -test statistic where is the coefficient when sample size is .When the population distribution is normal distribution, the value of should be close to one. quantile of statistic can be obtained by the look-up table method.When ≤ , the original hypothesis should be rejected at the significant level, indicating that does not obey normal distribution; when > , the original hypothesis cannot be rejected, and satisfies normal distribution.Under the conditions of ( 8), for every ∈ (1, 2, . . ., ), extract the upper quantile Fit upper and lower using the nonlinear least square method.The tabulated function = ( ), = 1, 2, . . 
., , is Mobile Information Systems available by (10).Then we need to obtain the fitting function, () = 0 + 1 × 1 () + ⋅ ⋅ ⋅ + × (), making the sum of squared deviations Take the minimum, of which 1 (), 2 (), . . ., () are nonmergeable monomials of variable , and 0 , 1 , . . ., are the coefficients of monomials. is a nonnegative polynomial of 0 , 1 , . . ., , so there must be a minimum value.Respectively, calculate partial derivatives of for 0 , 1 , . . ., , and make them equal to zero. Experiment and Analysis The experiment data is collected by the coil detectors underground closed to Optical Valley Walking Street in Wuhan, China.Coil detectors collect data every 15 minutes, recording time, flow, occupancy, and so forth, as shown in Table 2. Use the method in [20,21] to calculate speed and density, and the ratio of the amount of data between two model curves to the total amount of experiment data is used to describe the performance of model.The loop detector in the outer lane measures the traffic flow of straight and right-turning lanes, and the loop detector in the inner lane measures the traffic flow of the left-turning lane.The traffic flow characteristics of two loop detectors must have certain differences.Therefore, analyze the coil data of both the outer lane and the inner lane to find the diversity of their speed-density relationship. Coil Data Analysis of the Outer Lane. The experimental steps are as follows. Step 1. Analyze coil data of the outer lane and find that density values are clustered at a number of points 1 , 2 , . . ., , where the mean value of the difference between the adjacent points is about 2.5 pcu/km.Divide density into a number of intervals with length 2.5 pcu/km by 1 , 2 , . . ., . Step 3. Execute a distribution test for where the result shows that one data set is too small to meet the requirements of the test.Merge the adjacent density segments in Step 2 to enlarge the amount of the small data set.Redo the distribution test for the new data set, more than 80% of which meets the normal distribution, with totally 95% of the total data satisfying the normal distribution, which makes it reasonable to consider all the small data set satisfying the normal distribution. Step Figure 4 shows the validation result of the speed-density logarithmic model of the outer lane when upper value = 0.95 and lower value = 0.05.Equations ( 18) correspondingly are the green and blue curves in Figure 4, which is the speed-density model of interrupted traffic flow created by the new description method.Significant test results indicate that values of two regression coefficients of two curves are minima ( < 2 − 16), which means that coefficients are significant and two log models constructed with density as the independent variable are applied to estimate velocity as the dependent variable. 
The coil data of the outer lane for two weeks, four weeks, six weeks, and eight weeks are, respectively, selected 3 gives the ratio of the data between two logarithmic curves to the total amount of data in each case.Make a longitudinal observation; it is obvious that, with upper value increasing and lower value decreasing, the proportion increases accordingly, where amplitudes are obvious, respectively, 7.2%, 6.2%, and 6.9%.On the other hand, the main transverse trend is that the proportion increases along with the increase of experiment data loosely, where, however, sixweek data has the best performance.The above suggests that the two logarithmic models are able to describe the speeddensity relation of the outer lane.Figure 5 shows the four groups' validation results when upper value = 0.95 and lower value = 0.05. Coil Data Analysis of the Inner Lane. We select coil data of the inner lane and follow Steps 1 to 5 as for the outer lane.When fitting sets upper and lower at Step 6, we find that the speed-density models proposed by scholars all have poor performance with goodness of fit of less than 0.5, which suggests that a single model cannot accurately describe the quantile set of the coil data.Thus we consider using a segmentation model. In the density-flow curve there is a critical density , which is the density of maximum traffic flow, as shown in Figure 1.When the density < , the traffic is in a state of flow; when > , the traffic flow gradually becomes crowded.Therefore, consider using as the critical value of the subsection. A Figure 6 shows the fitting result of a segmentation model of the outer lane when upper value = 0.95 and lower value = 0.05, and ( 19) are the models corresponding to the green curve and blue curve in Figure 6, which is the speed-density model of interrupted traffic flow via the new description method. value of each parameter is very small, suggesting the coefficient is very significant. The coil data of the inner lane for two weeks, four weeks, six weeks, and eight weeks are, respectively, selected and four groups of parameters are established for the model validation, the same as that for the outer lane.Table 4 gives the ratio of the data between two logarithmic curves to the total amount of data in each case.Comparing the result with that of the outer lane, we find that the validation results of the model of the inner lane are better with greater ratio.Take a longitudinal observation; similarly, it is obvious that with upper value increasing and lower value decreasing, the proportion increases accordingly, where amplitudes are smaller than that of outer lane, respectively 3.7%, 1.9%, and 4.3%.The main transverse trend is the same as outer lane except the case of upper value = 0.95 and lower value = 0.05.The result indicates that the two segmentation models are suitable for describing the speed-density relation of the inner lane.Figure 7 shows the four groups' validation results when upper value = 0.95 and lower value = 0.05.4.3.1.Difference between the Models of the Outer Lane and the Inner Lane.The loop detector of the outer lane measures right-turning and straight lanes, and the coil is located in a road adjacent to a commercial pedestrian street with a heavy flow of people and traffic.A logarithmic model is applied to describe traffic flow with large density, and therefore it is accepted that the coil data of the outer lane satisfy the logarithmic model. 
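The procedure applied above to both lanes — bin the samples by density, test each bin's speeds for normality with the Shapiro-Wilk statistic, extract an upper and a lower quantile per bin, and fit each quantile set by least squares — can be summarized in a short sketch. The bin width, quantile levels, and the use of the normal quantile mean + z_p·s are illustrative choices consistent with the description above, and the logarithmic form corresponds to the outer-lane model; this is not the authors' code.

```python
import numpy as np
from scipy import stats

def quantile_bounds(density, speed, bin_width=2.5, p_upper=0.95, p_lower=0.05, alpha=0.05):
    """Per-bin upper/lower speed quantiles for bins whose speeds pass the
    Shapiro-Wilk normality test (bins that are too small would be merged with
    neighbors in practice; here they are simply skipped)."""
    edges = np.arange(density.min(), density.max() + bin_width, bin_width)
    centers, v_up, v_lo = [], [], []
    z_up, z_lo = stats.norm.ppf(p_upper), stats.norm.ppf(p_lower)
    for lo, hi in zip(edges[:-1], edges[1:]):
        v = speed[(density >= lo) & (density < hi)]
        if len(v) < 3:
            continue
        _, p_value = stats.shapiro(v)
        if p_value <= alpha:               # reject normality at level alpha
            continue
        centers.append(0.5 * (lo + hi))
        v_up.append(v.mean() + z_up * v.std(ddof=1))
        v_lo.append(v.mean() + z_lo * v.std(ddof=1))
    return np.array(centers), np.array(v_up), np.array(v_lo)

def fit_log_model(k, v):
    """Least-squares fit of v = a + b*ln(k) to one quantile set."""
    A = np.column_stack([np.ones_like(k), np.log(k)])
    (a, b), *_ = np.linalg.lstsq(A, v, rcond=None)
    return a, b

# k_c, v_up, v_lo = quantile_bounds(density, speed)   # from the coil data
# a_up, b_up = fit_log_model(k_c, v_up)                # upper bounding curve
# a_lo, b_lo = fit_log_model(k_c, v_lo)                # lower bounding curve
# For the inner lane, each branch of the piecewise model (exponential below the
# critical density, logarithmic above it) would be fitted in the same way.
```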
Experimental Result Analysis The loop detector of the inner lane measures the leftturning lane which also has heavy traffic flow.The speeddensity relation of the inner lane does not satisfy the single log model but is suitable for the segmentation model.The logarithmic model and the inner lane meets the segment model.What is more, the road segment where loop detectors located is unblocked at most of the time; the inner lane is more unimpeded than the outer lane. Conclusion In this paper, the characteristics of urban interrupted flow data were analyzed, and it was found that they differ from the data of uninterrupted flow.Since the existing classical models cannot describe them very well, a description method of speed-density relation for interrupted traffic flow was proposed where the upper and lower curves were used as the upper and lower bounds of the predicted speed.In this method, the speed was divided into small data sets which satisfied the normal distribution, and two quantiles of normal distribution were obtained as the predicted values.Then two quantile sets were fitted to get two curves as the speed-density relation model of the interrupted traffic flow.Finally, the coil data of the outer and inner lanes were applied for model validation.The results showed that the new method can give a good description of the speed-density relationship of interrupted traffic flow and get different model results for the outer lane and inner lane, whereby the speed-density relation of the outer lane satisfies the logarithmic model and the inner lane satisfies the segment model instead of the single model, where when the density is less than critical density, it conforms to the exponential model and otherwise the logarithmic model.The fitting results of the internal and external lanes were analyzed in combination with the actual local road environment and traffic flow theory.So this model can provide favorable data analysis and presentation for city traffic, thus to provide decision support for intelligent transportation. Figure 2 : Figure 2: Comparison of six speed-density models. Figure 4 : Figure 4: Speed-density logarithmic model of the outside lane. Figure 6 : Figure 6: Speed-density multisession model of the inside lane. Table 1 : Table 1 lists results for the speed-density model, including the Greenberg model, Underwood model, Northwestern model, Newell's model, Pipes-Munjal model, Drew model, Modified Greenshields model, Del Castillo and Benitez model, Van Aerde model, MacNicholas model.These models with the parameters of important physical meaning provide good results.Speed-density models. Wang et al. [19] established a speed-density logit probability model with four parameters.Wang et al. used VISSIM Table 2 : The data example. 4. Get two quantiles as upper and lower critical values of velocity for density .(fit upper and lower ).Because the loop detector is located near commercial street which has heavy traffic, we use the logarithmic model to formulize the data. Table 4 : Validation results of the model of the inner lane. density-flow curve is obtained by local polynomial regression fitting, and the density value at the curve vertex is just .Take as the critical value and piecewise analyze
5,157.6
2016-12-19T00:00:00.000
[ "Computer Science" ]
Geometric Pseudospectral Method on SE(3) for Rigid-Body Dynamics with Application to Aircraft General pseudospectral method is extended to the special Euclidean group SE(3) by virtue of equivariant map for rigid-body dynamics of the aircraft. On SE(3), a complete left invariant rigid-body dynamics model of the aircraft in body-fixed frame is established, including configuration model and velocity model. For the left invariance of the configuration model, equivalent Lie algebra equation corresponding to the configuration equation is derived based on the left-trivialized tangent of local coordinate map, and the top eight orders truncated Magnus series expansion with its coefficients of the solution of the equivalent Lie algebra equation are given. A numerical method called geometric pseudospectral method is developed, which, respectively, computes configurations and velocities at the collocation points and the endpoint based on two different collocation strategies. Through numerical tests on a free-floating rigid-body dynamics compared with several same order classical methods in Euclidean space and Lie group, it is found that the proposed method has higher accuracy, satisfying computational efficiency, stable Lie group structural conservativeness. Finally, how to apply the previous discretization scheme to rigid-body dynamics simulation and control of the aircraft is illustrated. Introduction The aircraft is usually regarded as a single rigid body in three-dimensional space. The dynamics of a rigid body is an important problem modeling the time evolution of aircraft and other vehicles [1]. In particular, in aircraft simulation and control, its authenticity affects the fidelity of the simulation and also determines whether it can truly reflect the performance of the designed controller to a certain extent. The rigid-body dynamics has fundamental motion invariants; for example, the flow of a Hamiltonian system is symplectic [2,3], the total energy of a rigid body is conserved in the absence of nonconservative forces [4], and the momentum map associated with a symmetry of a rigid body is preserved [5]. Furthermore, the exact flow of a rigid body always stays on a configuration manifold, since the combined configurations of the translation and rotation of the rigid body construct a special Lie group called the special Euclidean group SE(3) [6]. The invariants often manifest through geometric characteristics of exact flow of a rigid body, such as symplecticity, first integrals, symmetry, and Lie group structure. However, for most rigid bodies, obtaining the analytical solution is quite difficult. Thus, under the condition of non analytical solution, it is desired to develop a numerical method whose iterates preserve the previous fundamental invariants [7]. For this purpose, the research area of development of numerical methods for rigid-body differential equations that preserve geometric properties of the numerical solution has been of great interest in recent years [8]. Simos [8,9], Monovasilis et al. [3,10], and Kalogiratou [11] construct several different multistep symplectic integrators based on symplectic geometry in order to preserve the Hamiltonian energy of rigid bodies as the harmonic oscillator, the pendulum, two-body problem, and orbital problem. Lee et al. [5] and Hussein et al. 
[12] adopt variational integrators to evolve the rigid-body dynamics, which ensure that while the total energy of the rigid body is conserved under conservative forces, the momentum associated with a symmetry of the system is also conserved. Iserles et al. [13] and Lee et al. [5] extend numerical methods in Euclidean Mathematical Problems in Engineering space into Lie group, so that numerical flow of the rigidbody dynamics has Lie group structural conservativeness, which is also the main focus of this paper. Retaining motion invariants under discretization has been proven not only a nice mathematical property, but also the key to improved numerics, as they capture the right dynamics (even in longtime integration) and exhibit increased accuracy [2,7,13,14]. Therefore, for driving the evolution of rigid-body dynamics, it is very important to develop a numerical method with preservation of the previous geometric characteristics. The difficulty of developing a numerical method on Lie group arises as the group is not the well-known Euclidean space R ; consequently, the rigid-body dynamics evolving on SE(3) cannot be properly integrated by conventional numerical method, including the popular Runge-Kutta schemes, developed for R . For this reason, one resorts to other means to drive the rigid-body dynamics forward while preserving the Lie group structure of the configuration. It can be roughly divided into two categories: classical Lie group methods which discretize the continuous equations of motion and Lie group variational integrators which discretize the variational principles of mechanics. Classical Lie group methods include the Munthe-Kaas (MK) method [15], the Crouch-Grossmann (CG) method [16], the Newmark-type method [17], and the commutator-free (CF) method [18], and so forth. The MK method is based on a differential equation on the Lie algebra and uses a single evaluation of the exponential map. The CG method updates the group elements by multiple evaluations using the exponential map. For detailed information, one can consult to [13]. Lie group variational integrators are proposed in recent years whose idea is to discretize the variational principles of mechanics: Hamilton's principle or Lagranged' Alembert principle [1,5,14]. Both methods have their own advantage and disadvantage. The former has a better time performance, but its accuracy is not an advantage under the same order condition. The latter preserves exactly energy, momentum, symplectic structure, and group structure [12,19] and offers a particularly robust and efficient framework for simulation [14], attitude control [4], motion control [1], and trajectory generating [14] of the aircraft and other vehicles, however; it has no advantage of accuracy, and its performance is subjected to the Newton-type solver for solving the equivalent vector equation [5,14,20]. Regardless of the classical Lie group methods or Lie group variational integrators, they are all based on Kenth Engø's idea about equivariant map which transforms the differential equations evolving on a Lie group into an equivalent differential equation evolving on a Lie algebra corresponding to the Lie group [21]. 
Since accuracy, time performance, conservativeness, and numerical stability of the aircraft rigid-body dynamics need to be considered comprehensively in aircraft simulation and control, we do not intend to adopt any of the previous methods but use the same idea of equivariant map to develop a new method on SE(3) for driving the evolution of aircraft dynamics. We resort to pseudospectral method, which is widely used in fluid mechanics, quantum mechanics, linear and nonlinear waves, aerospace, and other fields by virtue of its high accuracy, spectral (or exponential) convergence rates, requirement for less computer memory under the same precision condition, and so forth [22]. Furthermore, it is commonly used in optimal control of the aircraft and other vehicles [23] and thereby lay the foundation for our simulation and control application. Unlike variational integrator must analytically derive the first-order discrete necessary optimality, which, more often than not, is nontrival for most optimal control problems [23], typical pseudospectral method can be used to transcribe a continuous optimal control problem into a discrete nonlinear programming problem (NLP), and it has been shown that solving the NLP derived from the pseudospectral transcription of the Gauss form is exactly equivalent to solve the discrete first-order necessary conditions [23,24]. However, when applied to rigid body dynamics directly, general pseudospectral method cannot preserve Lie group structure. For this purpose, a numerical method on SE(3) called geometric pseudospectral method is proposed in this work, which provides satisfactory accuracy, computational efficiency and preserves the essential Lie group structure. To our knowledge, Moulla et al. [25] was the first to pose the concept of "geometric pseudospectral method" not long ago. They suggested a polynomial pseudospectral method that preserves the geometric structure of port-Hamiltonian systems, the phenomenological laws, and the conservation laws without introducing any uncontrolled numerical dissipation. However, their method was designed only for port-Hamiltonian systems having a special structure, that is, the Dirac structure, thus it could not be directly extended to the systems on SE(3). Thereby, how to apply the pseudospectral method to a dynamics system on SE(3) and preserve essential Lie group structure of the system is still an open question, which is the main topic in this paper. In this work, we establish a completely rigid-body dynamics model of aircraft in body-fixed frame on SE (3). With respect to kinematics, due to the fact that applying the general pseudospectral method directly to the configuration equation of the rigid-body dynamics could not preserve the Lie group structure of the solution of the equation, drawing on the equivariant map, we transform the configuration equation on SE(3) into an equivalent equations in a Lie algebra space and accordingly give the top eight orders reduced truncated Magnus series expansion with its coefficients ( ) (refer to the appendix) of the solution of the equivalent equation under left trivialization. For the condition of each term in [2 ] ( ) being multivariate quadrature, we apply the general pseudospectral method to compute quadrature weights, so that obtaining the values of [2 ] ( ) at the collocation points and the endpoint (collectively referred to discrete points). Finally, we compute configurations at the discrete points via coordinate map. 
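To indicate the structure of the Lie algebra equation and the truncated series mentioned above, the first terms can be written out for the left-invariant configuration equation; the signs shown follow from the left-trivialized dexp^{-1} and are given here only as a sketch — the paper's appendix should be consulted for the full eighth-order coefficients:

\[
\dot g(t) = g(t)\,\hat\xi(t),\qquad g(t_k + t) = g(t_k)\,\exp\!\big(\sigma(t)\big),\qquad
\dot\sigma = \mathrm{dexp}^{-1}_{-\sigma}\big(\hat\xi\big) = \hat\xi + \tfrac12\,[\sigma,\hat\xi] + \tfrac1{12}\,[\sigma,[\sigma,\hat\xi]] + \cdots,
\]
\[
\sigma(t) = \int_0^{t}\hat\xi(\tau)\,\mathrm{d}\tau
+ \tfrac12\int_0^{t}\Big[\int_0^{\tau}\hat\xi(s)\,\mathrm{d}s,\ \hat\xi(\tau)\Big]\mathrm{d}\tau + \cdots,
\]

so each term of the truncated expansion is a multivariate integral of nested commutators, which is exactly what the pseudospectral quadrature weights are used to evaluate at the collocation points and the endpoint.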
It is worth mentioning that the theories with respect to Lie group methods in the extensive literatures [13][14][15] are almost derived under right trivialization and thus are not suitable for our method, since our modeled aircraft configuration model is left invariant. To this end, we provide a relatively complete set of left trivialization framework for developing geometric pseudospectral method on SE (3). With respect to dynamics, considering the Lie algebra space is isomorphism to R , we use the general pseudospectral method directly to compute the velocities at the same discrete points as that of configuration. Under the same 4th-order and the same large step size, the proposed method is compared with implicit RK, explicit RKMK, and Gauss pseudospectral method. Through a comprehensive comparison of accuracy, computational efficiency, and Lie group structural conservativeness, the proposed method has higher accuracy, satisfying computational efficiency, stable Lie group structural conservativeness. The proposed method is coordinate-free, no need to switch chart and special handling of singularities. Finally, we give a way of applying the proposed numerical method to rigid-body dynamics simulation and control of the aircraft. The rest of the paper is organized as follows. In Section 2, a completely left invariant rigid-body dynamics model of the aircraft on SE(3) is established. Geometric pseudospectral method on SE(3) is developed in Section 3. In Section 3.1, for completeness, the basic idea of the general pseudospectral method in Euclidean space is roughly introduced. In Section 3.2, geometric pseudospectral method on SE (3) is developed under left trivialization, and a 4th-order geometric pseudospectral algorithm is given. In Section 3.3, numerical tests are carried out on a free-floating rigid-body model in comparison with four other numerical methods to validate numerical accuracy, computational efficiency, and structural conservativeness of the proposed method. Then, how to apply the proposed method to rigid-body dynamics simulation and control of the aircraft is illustrated in Section 4. Finally, the conclusions and future work are outlined in Section 5. Aircraft Rigid-Body Dynamics Model on SE(3) The special Euclidean group SE(3) is the semidirect product of SO(3) and R 3 , whose group element = ( , ) consists of rotational component ∈ SO(3) and translational component ∈ R 3 of the body-fixed frame { }, relative to = +1} is called the special orthogonal group [6]. In this section, we will regard the aircraft as a single rigid body in three-dimensional space and establish a rigid-body dynamics model of the aircraft on SE(3). Kinematics Model. The navigation equations of the aircraft are given by [26] [ . (2) Denote [ , , ] and [ , V, ] by ( ) and ( ), respectively, then we havė Also, the kinematic equations are given by [26] [ where, , , and are roll angle, pitch angle, and yaw angle, respectively, which are commonly referred to as Euler angles; , , and denote the roll rate, pitch rate, and yaw rate, respectively, about -body axis, -body axis, and -body axis. Then, we have where ( ) and ( ) are, respectively, called the homogeneous representation of ( ) and that of ( ) = [ ( ), ( )] ∈ se(3), se(3) = so(3) × R 3 is the Lie algebra corresponding to SE(3) where so(3) is the Lie algebra corresponding to SO(3) [6]. The right hand of (7) is the infinitesimal generator of the action corresponding to ( ), and it is left invariant under left multiplication by constant matrices [27]. 
These properties determine the trivialization way of local coordinate map in Section 3.2.1. Dynamics Model. The force equations of the aircraft are given by [28] [ where,,V , anḋdenote -axis component, -axis component, and -axis component, respectively, of the aircraft acceleration in the body-fixed frame, is the aircraft mass, is the wing reference area, is the free-stream dynamic pressure, is the engine thrust, and , , , , and , denote total -axis force coefficient, total -axis force coefficient, and total -axis force coefficient, respectively. General Pseudospectral Method in Euclidean Space. General pseudospectral method will be used for computing the velocities in the Lie algebra. For this purpose, we briefly describe the basic principle of general pseudospectral method [24]. Let us take solving the following differential equation in R at , for example, Firstly, we equally divide a time interval [ 0 , ] into several time subintervals [ , +1 ] having a length ℎ and transform the time interval ∈ [ , +1 ] to the time interval ∈ [−1, 1] via the affine transformation Thus, accordingly, (15) We approximate the solution of (17) by the following formula: where ( ) = ∏ =0, ̸ = ( − )/( − ) is the Lagrange polynomials, satisfying the isolation property Equation (18) together with the isolation property leads to the fact that thus ( ) = ( ). Through expressing the derivative of the Lagrange polynomials at the collocation points in differential matrix form We can write (17) at the collocation points as a set of differential algebra equations as follows: Based on the previous equations, we establish defect equations and apply iterative algorithms to the previous equations so that determining the approximation to ( ) at the collocation points. Finally, according to the following formula, we obtain the approximation to (15) at the endpoint +1 of time subinterval [ , +1 ]: ( )d is quadrature weight corresponding to the collocation points. This approximate solution will be initial value of ( ) over [ +1 , +2 ] and so on. According to different selection methods of collocation points, pseudospectral methods can be divided into the standard method and the orthogonal method. Common collocation points in the orthogonal method are those obtained from the roots of either Chebyshev polynomials ( ) or Legendre polynomials ( ) belonging to the orthogonal polynomial [29]. The benefit of using the orthogonal method over the standard method is that the quadrature approximation to a definite integral is extremely accurate [24]. Furthermore, according to whether the endpoint is or is not a collocation point, pseudospectral methods fall into three general categories [30]: Gauss methods, neither of the endpoints −1 or 1, is a collocation point; Radau methods, at most one of the endpoints −1 or 1, are a collocation point; Lobatto methods, both of the endpoints −1 and 1, are collocation points. 
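To make the collocation machinery above concrete, the following minimal Python/NumPy sketch builds the Legendre-Gauss points and quadrature weights, assembles the Lagrange differentiation matrix, and checks the quadrature accuracy on a monomial. The function names are ours, and the snippet only illustrates the standard construction; it is not the implementation used in this paper.

```python
import numpy as np

def gauss_points_and_weights(n):
    """Legendre-Gauss collocation points on (-1, 1) and the associated quadrature weights."""
    return np.polynomial.legendre.leggauss(n)   # roots of the Legendre polynomial P_n

def differentiation_matrix(nodes):
    """D[i, j] = dL_j/dtau at nodes[i], where L_j are the Lagrange basis polynomials."""
    n = len(nodes)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                num = np.prod([nodes[i] - nodes[m] for m in range(n) if m not in (i, j)])
                den = np.prod([nodes[j] - nodes[m] for m in range(n) if m != j])
                D[i, j] = num / den
        D[i, i] = -np.sum(D[i, :])              # rows sum to zero (derivative of a constant)
    return D

tau, w = gauss_points_and_weights(3)
print(np.dot(w, tau**4))                        # 0.4 = integral of tau^4 over [-1, 1]
print(differentiation_matrix(tau) @ np.ones(3)) # ~0: differentiating the constant function
```

In a collocation step, the differentiation matrix enforces the dynamics at the interior points, while the Gauss weights recover the endpoint value by quadrature, exactly as described above.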
In the rest of this paper, we will develop our geometric pseudospectral method based on Legendre-Gauss points, abbreviated as Gauss points in the following, for two main reasons: firstly, there is an intimate relationship between Gauss method and some implicit RK method whose coefficients in the Butcher Tableau can be used by our method for computing configuration [2]; secondly, when Gauss method is used for transcribing the continues optimal control problem into a discrete nonlinear programming problem, it does not suffer from a defect in the optimality conditions at the boundary points due to the endpoints being not collocation points [23]. (3). First recall and rewrite the aircraft rigid-body dynamics model Equations (7) and (14) in Section 2 as follows: Geometric Pseudospectral Method on SE . Step 2.2.4. End child loop; Step 2.2.5. Use the final iterative results (step) and (step) in child loop to compute the velocity ( +1 ) at the endpoint +1 , Step 2.2.6. Compute configuration ( +1 ) at +1 : Step 2.2.6.1. Compute 4th-order reduced truncated Magnus series expansion [4] ( +1 ) at +1 , [4] Step 2.2.6.2. Use the Cayley map to update configuration ( +1 ), ( +1 ) = ( ) ⋅ cay ( [4] ( +1 )) Step 2.2.7. Compare +1 with , so that determine whether or not to terminate the main loop; Step 3. End loop. For the kinematics Equation (25), it is well known that the solution of (25) stays on SE(3) for all ≥ 0 . How to use pseudospectral method to solve (25) while preserving the structural feature of the differential equation under discretization of ( ) is the main problem to be solved in this section. In R , both the solution space and the tangent space are linear vector space. Classical numerical methods, including general pseudospectral method, just rely on domain space consisting of vectors. However, the special Euclidean group is a nonlinear manifold, so using classical numerical methods directly to solve differential equation on SE(3) will be not able to preserve its Lie group structure. Reference [21] indicated that any differential equation in the form of an infinitesimal generator on a homogeneous space (a manifold with a transitive Lie group action) is shown to be locally equivalent to a differential equation on the Lie algebra corresponding to the Lie group acting on the homogenous space. Also, a Lie algebra corresponding to the Lie group is a Mathematical Problems in Engineering 7 vector space with the additional structure of a commutator. For the previous reasons, the Lie algebra is the natural choice of space for our geometric pseudospectral method. First, we will apply the equivariant map to transform (25) evolving on SE(3) into an equivalent differential equation evolving in se(3). Next, we use the Gauss pseudospectral method to solve the equivalent differential equation in order to obtain th-order truncated approximation of the equation at the Gauss points and the endpoints, followed by mapping the approximate solution back to configuration space via local coordinate map . For the dynamics equation (26), because the velocity ( ) belongs to se (3), and actually se (3) is isomorphic to the R 6 , we can use Gauss pseudospectral method directly on (26) to compute the velocities at the Gauss points and the endpoint. 
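Before detailing the configuration computation, it may help to fix the Lie-algebra bookkeeping in code. The sketch below gives the se(3) hat map behind the homogeneous representation, the Cayley map used as a local coordinate map in step 2.2.6.2, and one right-multiplicative configuration update. It is a hedged illustration: the 4x4 homogeneous convention, the helper names, and the final check are our own choices, not code from the paper.

```python
import numpy as np

def hat_so3(w):
    """so(3) hat map: 3-vector -> 3x3 skew-symmetric matrix."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def hat_se3(xi):
    """se(3) hat map: xi = (omega, v) in R^6 -> 4x4 homogeneous generator."""
    M = np.zeros((4, 4))
    M[:3, :3] = hat_so3(xi[:3])
    M[:3, 3] = xi[3:]
    return M

def cay_se3(Omega_hat):
    """Cayley map se(3) -> SE(3): cay(X) = (I - X/2)^(-1) (I + X/2)."""
    I = np.eye(4)
    return np.linalg.solve(I - 0.5 * Omega_hat, I + 0.5 * Omega_hat)

def update_configuration(g_k, Omega):
    """Step 2.2.6.2-style update: g_{k+1} = g_k * cay(Omega^), with Omega a 6-vector."""
    return g_k @ cay_se3(hat_se3(Omega))

g1 = update_configuration(np.eye(4), np.array([0.1, -0.2, 0.3, 1.0, 0.0, 0.5]))
print(np.allclose(g1[:3, :3].T @ g1[:3, :3], np.eye(3)))  # rotation block stays orthogonal
```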
Computing Configurations at the Gauss that is, the following diagram commutes of equivariant map : First, from the definition of an action of on M, we can obtain an equivariant map Φ : → M with respect to the left translation action of on itself and an action of Φ on M as follows: It is known that there is a local coordinate map : g → on with the Lie algebra g; the most typical one is exponential map exp : g → . At this point, we need to find an action of on g such that will be an equivariant map with and the left action of on itself, namely, In the case where is the exponential map, is nothing else than the well-known Baker-Campbell-Hausdorff (BCH) formula: where log : → g is called the logarithm map. Since composition of two equivariant maps is also an equivariant map, we can construct an equivariant map Φ ∘ from g to M with respect to the action on g and Φ on M. Diagram commutes of composition Φ ∘ are as follows: The theorem 3.6 of [21] stated that if is an equivariant map, then the infinitesimal generators of the action with respect to the same element ∈ g are -related vector fields. Thus, the infinitesimal generators g and M of the flows exp( ) and Φ exp( ) , respectively, on g and M are Φ ∘related; that is, Finally, we need to determine what g is, and the following theorem gives the conditions that it needs to meet. The following commutative diagram describing the equivariant maps of composition Φ ∘ (left) and its infinitesimal generator (right) sums up the previous processes: Also, [21] stated that any differential equation on M which can be written as an infinitesimal generator fits into the previous framework, including all Lie type equations, isospectral flows, and rigid frames. It is noted that, as mentioned in Section 2.1, the right hand of (7) or (25) is an infinitesimal generator on SE(3), and here we assume that the solution ( ) of (25) can be written as the following form [13]: In order to obtain the explicit expression of ( ), we need to solve the following differential equation: For deriving the previous formula, we differentiate (35) as follows: Comparing (37) with (25) and (35), we have Taking the inverse of previous formula, we obtain (36). It seems from the equivariant map and the previous formula derivation that solving equation (25) can be transformed into a research on the equivalent equation (36), and the manifold M is SE(3) at this point. d −1 − ( ( )) has different forms, according to the different choices of local coordinate map . In the case where is the exponential map, the vector field d −1 − on g can be represented as [13] d exp −1 − ( ( )) = ( ) + 1 2 ad ( ( )) where the adjoint operator ad : g → g is Lie bracket or commutator, ad ( ( )) = [ , ( )], satisfying recursive expression ad ( ( )) = [ , ad −1 ( ( ))], and are Bernoulli numbers [31] In the context of rigid-body motion, the right trivialization corresponds to the differential equation with tangent vectors ∈ X(M), ∈ g, and ∈ , and the left trivialization corresponds to the differential equation with tangent vectors ∈ X(M), ∈ g, and ∈ . The former represents the change of configuration in a space frame, and the latter represents the change of configuration in the bodyfixed frame. Here, let d be the left trivialization of , since the configuration model established in Section 2.1 is in the body-fixed frame. 
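The left-trivialized d(exp)^{-1} series just written is evaluated in practice by truncating its Bernoulli-number expansion; for a 4th-order scheme, keeping commutators up to second order is sufficient (see the order condition discussed in Section 3.3). The sketch below works directly on 4x4 hat matrices and reuses hat_se3 from the earlier sketch; the Bernoulli convention B_1 = -1/2 and the function names are our assumptions.

```python
import numpy as np

BERNOULLI = [1.0, -0.5, 1.0 / 6.0, 0.0, -1.0 / 30.0]   # B_0 .. B_4, with B_1 = -1/2
FACTORIAL = [1.0, 1.0, 2.0, 6.0, 24.0]

def ad(A, B):
    """Lie bracket (matrix commutator) of two se(3) elements given in 4x4 hat form."""
    return A @ B - B @ A

def dexpinv_minus(Omega_hat, xi_hat, order=2):
    """Truncated d(exp)^{-1}_{-Omega}(xi) = sum_k (B_k / k!) (-1)^k ad_Omega^k(xi)
    = xi + 1/2 [Omega, xi] + 1/12 [Omega, [Omega, xi]] + ...  (left-trivialized sign convention)."""
    out = np.zeros_like(xi_hat)
    adk = xi_hat.copy()                      # ad_Omega^0(xi)
    for k in range(order + 1):
        out = out + (BERNOULLI[k] / FACTORIAL[k]) * ((-1.0) ** k) * adk
        adk = ad(Omega_hat, adk)             # ad_Omega^{k+1}(xi)
    return out

# In the integrator, xi_hat is the interpolated body velocity and Omega_hat the current iterate
# of the Lie-algebra solution; the truncated series is what the quadrature below integrates.
```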
In the case of d being the left trivialization and being the exponential map, we can find an explicit The previous formula can be rewritten as an infinite sum ( ) = ∑ ∞ =0 ( ), where each ( ) is a linear combination of terms that have equal numbers of integrals (exclude the outermost integral from 0 to ) and commutators. We denote each term of ( ) by ( ) and associate it with a binary tree ∈ T , where T is a set which includes all trees with the same number of integrals and commutators. Then, we can rewrite (41) as where It is worthwhile to note that, unlike right trivialization in [13], here, we take left-trivialized tangent of local coordinate map, thus̃= Based on the binary tree theory, we can find the following th-order approximate solution of (36), where F is the set of all -power rooted trees. Thereby, we give the Magnus series terms with its coefficients ( ) under left trivialization in Table 2, which correspond to the top eight orders rooted trees. Then, we use Gauss pseudospectral method to solve (45), so that we get the approximate solution of (36) at the Gauss points < 1 , . . . , < +1 and the endpoint +1 ≜ + ℎ. To begin with, assume that all of velocities ( ) at the Gauss points are known previously; we use the Lagrange polynomials at these points to approximate the velocity at any time ∈ [ , +1 ] as follows: where each term ( ) is the following form of multivariate quadrature: where = { 1 , . . . , ∈ R| 1 ∈ [0, ], ℓ ∈ [0, ℓ ], ℓ ∈ {1, 2, . . . , ℓ − 1}, ℓ = 2, 3, . . . , } and L denotes multivariate function. One has wherẽ= { 1 , . . . , ∈ R| 1 ∈ [0, 1], ℓ ∈ [0, ℓ ], ℓ ∈ {1, 2, . . . , ℓ − 1}, ℓ = 2, 3, . . . , } and is similar to −1 . Through computing ( ) of (47) term by term, we obtain th-order approximate solution [ ] ( ) of ( ) at the discrete points. As seen from (48), ( ) depends on the velocities at the Gauss points, and the velocities need to be solved in an iterative manner which will be explained in next section. Thereby, the solutions of configuration at the Gauss points need to be also determined in an iterative manner. Finally, we use [ ] ( ) at the Gauss points to compute configuration corresponding to the same points via the following formula: There are several choices of local coordinate map : g → , such as exponential map and Cayley map [21]. We incline Table 2: Magnus series terms corresponding to the top eight orders rooted trees. Computing Velocities at the Gauss Points and the Endpoint. As mentioned above, here, we can directly use general pseudospectral in Section 3.1 to compute velocities at the Gauss points 1 , . . . , and the endpoint +1 . However, due to (26) being implicit, we have to use iterative algorithm to solve it. When computing configurations in the preceding section, we assumed that ( ), = 1, . . . , in (46) are known in advance. Hence, we need to give their solutions, and we are going to do it in this section. To begin with, we write (26) at all of the Gauss points in a compact matrix form, . . . wherẽ∈ R × is the submatrix of in Section 3. ] . By computing the velocity deviation between adjacent iterative steps, we determine whether or not to terminate the iterative process. It is noteworthy that we must ensure that computing configurations and velocities at the same Gauss points. Finally, after the final satisfying iterative results of the velocities at the Gauss points are obtained, we use the following formula to compute velocity at the endpoint: 4th-Order Algorithm of Geometric Pseudospectral Method. 
In accordance with the aforementioned basic principle of geometric pseudospectral method, we develop a 4thorder algorithm for the rigid-body dynamics evolution over time. Implementation and the Order Condition. Legendre-Gauss point is the root of Legendre polynomial In fact, a convenient way to compute the Legendre-Gauss points is via the eigenvalues of the following tridiagonal Jacobi matrix: Due to the fact that (25)- (26) are implicit, therefore, we have to use iterative algorithm to update the velocities and configurations at the Gauss points. In the interactive processes, we need to compute the velocity deviation and configuration deviation, respectively, between the adjacent interative steps. For velocity deviation, since the Lie algebra space se (3) in which the velocities belong is isomorphic to R 6 , so we are able to use minus directly. However, for configuration deviation, because configuration space is a nonlinear manifold, we introduce the following natural error of configuration [27]: By the way, the reason why we choose the previous metric is its intuitive physical meaning; taking SE(2) as an example, the 2-norm ‖( (step+1) ) (step) ‖ 2 denotes angle difference Δ = (step+1) − (step) between configurations, and the 2- between configurations. The last one required to pay attention to is to ensure computing configurations and velocities at the same Gauss points. Turning to order condition of the algorithm, there are something worthy of our attention. Firstly, if we consider the time symmetry of the Magnus series expansion, the number of terms belong to (47) can be further reduced [13]. According to the time symmetry, function [ ] ( ) can be expanded as the odd power of time , and then [2 ] ( ) = 2 −1 ( ) + O( 2 +1 ). For this purpose, we provide the top eight orders reduced truncated Magnus series expansion in Table 3 which have been used in step 2.2.3.3 and step 2.2.6.1 of Algorithm 1. Secondly, let the Magnus series expansion be truncated up to the th-order trees for ( +1 ) and −1th-order trees for the intermediate stages ( ), then the resulting schemes have th-order [32]. Moreover, [2] stated that the interpolatory quadrature formula with Gauss points is 2 th-order. In summary, our 4th-order geometric pseudospectral method only depends on two Gauss points. Numerical Test on Free-Floating Rigid Body. In this section, the performance of Algorithm 1 is tested through evolution of the following free-floating rigid body dynamics equations (60)-(61) on SE (3). Note that, under the condition of no external force, (60)-(61) are equivalent to (25)- (26). Table 1 summarizes the parameters used in the numerical test as follows: ] . (61) For making the numerical test more comparable, we select four 4th-order numerical methods used to compare with geometric pseudospectral method (GPM), including ode45 [33], implicit 4th-order Runge-Kutta (RKi), explicit 4th-order RKMK (RKMK) [13], and 4th-order Gauss pseudospectral method (PM) [24], where iterative number and tolerance of the child loop of GPM and PM are set to 10 and 10 −14 , respectively. Among the previous methods, ode45 is the build-in integrator from MATLAB which is a variable-step method and whose results would be used as the ground truth with a specified tolerance 10 −14 . Both RKi and PM are the methods in Euclidean space. RKMK is an approximate Runge-Kutta method on Lie group [15]. It is worth mentioning that each test must be carried out under the same step size. 
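Two implementation details above lend themselves to a compact illustration: computing the Legendre-Gauss points from the tridiagonal Jacobi matrix, and the fact that a 4th-order scheme needs only two Gauss points. The sketch below shows the Jacobi-matrix route and, as a simplified stand-in for the full Algorithm 1, the classical two-point 4th-order Gauss-Magnus update adapted to the right-multiplicative (left-trivialized) kinematics; the commutator sign follows that convention, the velocities at the Gauss points are assumed given (the paper obtains them iteratively together with the dynamics), and all names are ours.

```python
import numpy as np
from scipy.linalg import expm

def gauss_points_golub_welsch(n):
    """Legendre-Gauss points/weights from the eigen-decomposition of the Jacobi matrix."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)            # off-diagonal recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)        # symmetric tridiagonal, zero diagonal
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0, :] ** 2
    return nodes, weights

def magnus4_step(g_k, xi1_hat, xi2_hat, h, coord_map=expm):
    """One 4th-order, two-Gauss-point update for dg/dt = g * xi^(t) (body-fixed kinematics).
    xi1_hat, xi2_hat: 4x4 hat matrices of the velocity at the Gauss points c = 1/2 -/+ sqrt(3)/6.
    coord_map: local coordinate map se(3) -> SE(3), e.g. the matrix exponential or the Cayley map."""
    Omega = 0.5 * h * (xi1_hat + xi2_hat) \
            + (np.sqrt(3.0) * h**2 / 12.0) * (xi1_hat @ xi2_hat - xi2_hat @ xi1_hat)
    return g_k @ coord_map(Omega)

print(gauss_points_golub_welsch(2)[0])              # [-0.5774, 0.5774] = -/+ 1/sqrt(3)
```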
In order to intuitively present the difference between these methods, the 240-second trajectories of the free-floating rigid body evolved with a large step size are shown in Figure 1. As seen from the evolution versus time, the position computed by PM is closest to the ground truth, that of GPM takes second place, and those of RKi and RKMK deviate from the true values to varying degrees. A likely reason why the accuracy of GPM is lower than that of PM is that PM computes the position directly through the iterative solver with the specified iteration number and tolerance, whereas GPM truncates the expansion to a specific order before the iterative solve. Although inferior to PM, the proposed method is clearly superior to RKi and RKMK in position accuracy. As can be seen, error accumulates faster in the translational components than in the rotational components. This can be explained by the fact that rotation is decoupled from translation and hence has its own error dynamics, whereas translation suffers from small cumulative errors in rotation. In addition, we also give the total computational time in the first line and the average time per step in parentheses. Obviously, RKMK needs the least time since it is an explicit method. Both GPM and PM need less time than RKi. In orientation, linear velocity, and angular velocity there is no obvious difference, so we need to compare these methods further quantitatively. In order to quantitatively compare the proposed method with the other three methods, deviation statistics and runtime statistics over 10 runs using a wide range of initial conditions on a 240-second evolution are provided in Figure 2. Figure 2(a) shows the position deviation against timesteps. It confirms that the accuracy of GPM is superior to RKi and RKMK under the same large step size, although it is inferior to PM in average position deviation, and the relationship of the position deviation between the four methods is consistent with that of Figure 1. In Figure 2(b), we give the orientation deviation against timesteps by calculating the SO(3) metric ‖log_SO(3)(R^T R_ode45)‖_2 [27], where R_ode45 denotes the orientation of the free-floating rigid body computed by ode45. It should be noted that R_ode45 is converted from the unit quaternion [26], since ode45 parameterizes the orientation by a unit quaternion at this point. Figure 2(c) shows the averaged computation time of the four methods. As can be seen, the computational efficiency of GPM is inferior to that of RKMK, because RKMK is an explicit method. However, our method requires less computation time than RKi and is nearly twice as efficient as RKi on average. Finally, we examine the conservativeness of the Lie group structure. Considering that the configuration is parameterized by a unit quaternion in ode45 and in the implicit RK method, we only compare the proposed method with explicit RKMK and the Gauss pseudospectral method. As mentioned, the group element R of SO(3) satisfies R^T R = I, and we compute the deviation ‖I_3x3 − R^T R‖_∞ for evaluating the structural conservativeness of the algorithms [5]. Figure 3 illustrates that the Gauss pseudospectral method in Euclidean space has no conservativeness of the Lie group structure, since it is not a Lie group method; RKMK and the geometric pseudospectral method on the Lie group are able to preserve the Lie group structure with accuracy approaching machine precision.
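For reproducibility, the two diagnostics used in Figure 2(b) and Figure 3 are simple to compute; the sketch below shows one possible implementation of the orientation deviation and the structural deviation, with SciPy used for the matrix logarithm. The helper names are ours, and the exact norm conventions may differ slightly from those behind the cited figures.

```python
import numpy as np
from scipy.linalg import logm

def orientation_deviation(R, R_ref):
    """||log_SO(3)(R_ref^T R)||_2: rotation-angle-type error with respect to a reference attitude."""
    W = np.real(logm(R_ref.T @ R))                   # skew-symmetric for true rotations
    w = np.array([W[2, 1], W[0, 2], W[1, 0]])        # vee map
    return np.linalg.norm(w)

def structure_deviation(R):
    """||I_3x3 - R^T R||_inf: how far the computed attitude has drifted off SO(3)
    (here the maximum absolute entry; an induced infinity norm could be used instead)."""
    return np.max(np.abs(np.eye(3) - R.T @ R))
```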
Simulation and Control Application of Aircraft It is seen from the numerical test in the previous section that the proposed method has better accuracy, stable structural conservativeness, and satisfactory computational efficiency. Thus, it is able to meet the fidelity and timeliness requirements of aircraft dynamics simulation. Since the aircraft dynamics are complex and underactuated, we adopt optimal control to generate flyable trajectories of the aircraft. As mentioned in Section 1, the pseudospectral method of the Gauss form can be easily applied to optimal control of aircraft and other vehicles and satisfies the optimality conditions. Reference [23] shows that solving the NLP derived from the pseudospectral transcription of the Gauss form is exactly equivalent to solving the discrete first-order necessary conditions. The KKT conditions of the NLP are exactly equivalent to the discrete first-order necessary conditions of the Bolza problem, and the KKT multipliers can be mapped to the costates of the continuous-time optimal control problem via the costate mapping principle. Because our geometric pseudospectral discretization scheme is also based on Gauss points, we can use it to transcribe a continuous aircraft optimal control problem into a discrete nonlinear programming problem (NLP). The resulting NLP can then be solved by many well-known optimization techniques [34,35]. For this purpose, we follow the Gauss pseudospectral transcription approach introduced in [36]. The optimal control problem can be formulated as in Algorithm 2. Depending on the specified scenario, cost functions include minimum control effort, minimum time, obstacle avoidance, and the quickest maneuver. For further details regarding the optimization setup, one can consult [36]. In this work, referring to the geometric optimal control framework proposed in [37], the rotation component of the configuration is not treated as a decision variable, for three reasons: firstly, during the optimization it can be reconstructed internally from the velocities by evaluating (45), (51), and (54); secondly, the Lie group structural conservativeness cannot be accounted for by NLP solvers unless an additional constraint on structural conservativeness is introduced, which may affect optimality; thirdly, the computational efficiency is improved. We take obstacle avoidance as our example; the optimal control is implemented using the combination of GPOPS [36] in MATLAB and the SNOPT package in TOMLAB version 7.7 [38]. Figure 4 shows simulation results for an aircraft avoiding obstacles. Their detailed analysis is the subject of ongoing work and will be presented in a future paper. Conclusions A completely left-invariant rigid-body dynamics model of the aircraft on SE(3) is established. Owing to the left invariance of the rigid-body dynamics model in the body-fixed frame, an equivalent differential equation on se(3) is given under left trivialization. Accordingly, based on the Magnus series expansion and the exponential map, the solutions of the configuration equation at the Gauss points and the endpoint are obtained. Velocities on se(3) at the discrete points are computed with the general pseudospectral method, since se(3) is isomorphic to R^6. Thereby, a 4th-order numerical method called the geometric pseudospectral method is developed. Through a series of numerical tests and comparisons with other numerical methods on the Lie group and in Euclidean space, the proposed method is shown to have a comprehensive advantage in accuracy, computational efficiency, and Lie group structural conservativeness.
Finally, the way of applying our method to aircraft simulation and optimal control is illustrated. In future work, we will further analyze its performance in aircraft simulation and control. Because the multivariate quadratures involve a large number of tedious commutator operations, we will try to find an alternative approach that simplifies the calculation so that our method can be extended to higher orders. Moreover, we will extend the proposed method to a broader class of left-invariant rigid-body dynamics systems in engineering.
8,572.2
2013-04-28T00:00:00.000
[ "Mathematics" ]
CD24 and APC Genetic Polymorphisms in Pancreatic Cancers as Potential Biomarkers for Clinical Outcome Background There are no validated biomarkers that correlate with the prognosis of pancreatic ductal adenocarcinoma (PDA). The CD24 and adenomatous polyposis coli (APC) genes are important in the malignant transformation of gastrointestinal cells. This study examined APC and CD24 genetic polymorphisms and their possible impact on survival of patients with PDA. Methods Clinical and pathological data as well as blood samples for extracting DNA were obtained for 73 patients with PDA. Real-time PCR assessed genetic variants of APC (I1307K and E1317Q), and four different single nucleotide polymorphisms (SNPs) in the CD24 gene: C170T (rs52812045), TG1527del (rs3838646), A1626G (rs1058881) and A1056G (rs1058818). Results The median age at diagnosis was 64 (41–90) years. Thirty-one patients (42.5%) were operable, 16 (22%) had locally advanced disease and 26 (35.5%) had disseminated metastatic cancer. The malignancy-related mortality rate was 84%. Median survival was 14 months (11.25–16.74). Survival was similar for wild-type (WT), heterozygous and homozygous variants of the APC or CD24 genes. The three most frequent CD24 SNP combinations were: heterozygote for A1626G and WT for the rest of the alleles (14% of patients), heterozygote for C170T, A1626G, A1056G and WT for the rest (14% of patients), and heterozygote for C170T, A1056G and WT for the rest (10% of patients). All patients were APC WT. The first two groups were significantly younger at diagnosis than the third group. Conclusions Specific polymorphisms in the APC and CD24 genes may play a role in pancreatic cancer development. Correlation with survival requires a larger cohort. Introduction Pancreatic ductal adenocarcinoma (PDA) is known for its high mortality and dismal prognosis. The overall 5-year survival rate is <6%. In the US in 2015, an estimated 48,960 new patients will be diagnosed, and 40,560 patients will die from their illness. [1]. PDA is caused as a result of mutations in cancer-associated genes, the majority of which are sporadic, and only about 15% are germline mutations. Sporadic mutations affect several key genes. An early-onset mutation appears in the K-Ras oncogene in 90% of the cases. Other typically affected genes are tumor suppressor genes, such as TP53, CDKN2A and SMAD4, with new candidate drivers of pancreatic carcinogenesis recently identified (KDM6A and PREX2) [2]. Mutations in moderately penetrant susceptibility genes, such as BRCA2, CDKN2A, and MLH1, account for <5% of pancreatic cancers, suggesting that much of the inherited risk to this disease may be due to low-penetrance common genetic variants [3]. Chemotherapy has little success in recurrent or disseminated pancreatic cancer. The best therapeutic results have been achieved with a combination of three different chemotherapy agents, yielding a median survival of less than one year [4]. These figures highlight the need for further research and understanding of the molecular pathways driving this malignancy. CD24 is a membrane mucin-like protein, anchored by glycosylphosphatidylinositol (GPI), and its core protein is composed of 27-31 amino acids. It is expressed predominantly on hematopoietic cells, mostly B-cell precursors, cells of the central nervous system and epithelial cells [5][6][7]. It functions as an adhesion molecule, binding to P-selectin on platelets and endothelial cells, as well as to L1 protein expressed on lymphoid and neural cells [5]. 
The CD24 gene has several genetic variants arising from single nucleotide polymorphisms (SNPs). These include C170T (170 C→T), a polymorphism that leads to an amino acid substitution from alanine to valine at a location involved in membrane linking through the GPI anchor [8][9]. Three additional SNPs are located in the 3'-untranslated region: A1626G (1626 A→G), A1056G (1056 A→G) and TG1527del, which may affect CD24 mRNA stability. Previous studies have shown that CD24 is a marker for a variety of cancer stem cells, including pancreatic cancer [10]. Changes in CD24 expression in cancer cell lines can alter cellular properties and tumor growth. We had previously shown that treatment of pancreatic and colon cancer cell lines with anti-CD24 monoclonal antibodies (MAbs) or CD24 downregulation using short hairpin (sh)RNA effectively inhibited cell proliferation in vitro and retarded tumorigenicity in xenografted nude mice [11]. The role of CD24 in PDA is still unclear. It has been linked to poor differentiation, but not to survival [12]. The tumor suppressor gene APC (adenomatous polyposis coli) has been extensively investigated and linked to colorectal cancer development [13,14]. Laken et al. reported a germline missense mutation that causes the substitution of T to A at nucleotide 3977, leading to the insertion of lysine (K) instead of isoleucine (I) at codon 1307 (I1307K). This is believed to cause instability, thus possibly contributing to malignant transformation [15]. Another polymorphism, the APC E1317Q variant, is a substitution of glutamine (Q) for glutamic acid (E) at codon 1317 (E1317Q). This results from a G to C substitution at nucleotide 4006 and may be linked to cancer development [16]. The role of APC in PDA is not clear. It has been shown that pancreatic tumors failed to develop following conditional inactivation of APC in the pancreas, suggesting that APC is required for tumorigenesis in the pancreas [17]. An earlier study that examined APC E1317Q and I1307K in PDA found no association in a cohort of 58 patients [16]. The current study aimed to investigate a possible association between the clinical course of PDA and genetic alterations in the CD24 and APC genes. Participants Newly diagnosed patients with PDA who were treated at the Department of Oncology at Tel Aviv Sourasky Medical Center, Tel Aviv, Israel, between 2000 and 2014 were prospectively recruited to the study. After obtaining written informed consent, blood samples were taken for analysis and genotyping of the CD24 and APC SNPs. The patients' files were retrospectively reviewed, and clinical and pathological data were collected, including demographics, disease stage, treatment, response to chemotherapy and survival. The trial was approved by the local Institutional Review Board (IRB, Sourasky Medical Center Helsinki Committee) and the Israeli Ministry of Health (Helsinki approval number 02-130, Israeli Ministry of Health application 919990171). Genetic Analysis Assay methods. Blood was collected in tubes containing EDTA. Peripheral blood leukocytes (PBLs) were isolated from whole blood samples by collecting white buffy coats obtained after blood centrifugation for 3 minutes at 4°C at 3,000 rpm and discarding the plasma supernatant. DNA Extraction Genomic DNA was extracted from PBLs by standard methods as described by Miller et al. [18].
Determination of the Different CD24 Polymorphisms Real-time PCR (qPCR) was used to genotype rs8734 (C170T), rs3838646 (TG1527del), rs1058881 (A1626G), and rs1058818 (A1056G) CD24 polymorphisms using Custom TaqMan SNP Genotyping Assays predesigned by Applied Biosystems (Applied Biosystems, Foster City, CA, USA), following the technical procedures recommended by the manufacturer. The assay reagents for SNP genotyping from the Assays-by-design consisted of a 40X mix of unlabeled PCR primers and TaqMan MGB probes (FAM and VIC dye-labeled). These assays were designed for the genotyping of the specific SNPs. The following primer sequences were used: For the rs8734 polymorphism: forward primer, GGTTGGCCCCAAATCCA; reverse primer, GACCACGAAGAGACTGGCTGTT; allele 1 (VIC), CACCAAGGCGGCT; allele 2 (FAM), CACCAAGGTGGCTG. For the rs3838646 and rs1058881 polymorphisms: Assay ID AH0JB89 and Assay ID AH1SAFH, respectively (Applied Biosystems). The conditions for PCR amplification were identical. The PCR reactions were carried out in a 10 μl volume containing 20 ng of genomic DNA and prepared using TaqMan Universal PCR Master Mix components (Applied Biosystems), which contained nucleotides, buffer, uracil-Nglycosylase, amplitaq, and a passive reference dye (ROX). The reaction mixture contained 5.0μl of TaqMan Universal PCR Master Mix, 2.75μl of H 2 O, 0.25μl of assay components (primer set and probe specific for each polymorphism), and 2.0μl of DNA from each sample. A Step One Plus Real-Time PCR System (Applied Biosystems) was used to perform the qPCR experiments using the following cycling protocol. The cycling was initiated by pre-heating at 600°C for 30 seconds and 950°C for 10 minutes, followed by 40 cycles of 920°C for 15 seconds, 600°C for 1 minute and 600°C for 30 seconds. The Stepone v2.2 (Applied Biosystems) program was used to interpret the reaction results, using the graphical representation of the VIC and FAM fluorophore emissions with respect to constitutive ROX emissions. Assessing the I1307K Polymorphism at the APC Gene Assessing the E1317Q Polymorphism at the APC Gene Genomic DNA was amplified by PCR using a forward primer (5'-GAAATAGGATGTAATCA GACG-3') and a reverse primer (5'-CACCACTTTTGGAGGGAGA-3'). Primers and detection of the specific polymorphic nucleotide (G/C at position 4006 of SEQ ID NO: 7) was by real-time PCR using the anchor primer: TGCTGTGACACTGCTGGAACTTCGC-FL (SEQ ID NO: 11) and sensor primer (ph-LC-Red705-CACAGGATCTTGAGCTGACCTAG (SEQ ID NO: 12). Statistical Analysis Descriptive statistical values for all variables were calculated for the entire study population. These included the median, range, frequencies and relative frequencies as applicable according to variable type. The relative frequencies of each gene permutation (combination) were calculated, and the three most frequent permutations were further analyzed. Descriptive statistics were calculated for each permutation group. Differences in categorical variables between the various permutation groups were analyzed using the Chi-squared test of independence. Differences in demographic quantitative variables were analyzed by nonparametric ANOVA with the Kruskal-Wallis test (for omnibus tests) and further with the Wilcoxon rank-sum test if pair comparisons were indicated by the former. Differences in survival curves were analyzed using the Log-rank test. A p-value <0.05 was considered statistically significant. When applicable, tests were two-sided and adjusted for multiple comparisons. 
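As a hedged illustration of the analysis plan just described, the sketch below shows how the chi-squared, Kruskal-Wallis, and Log-rank comparisons could be run on a patient-level table in Python. The file, column names, and genotype coding are hypothetical, and the lifelines package is assumed to be available for the Log-rank test; this is only an illustration, not the software actually used in the study.

```python
import pandas as pd
from scipy.stats import chi2_contingency, kruskal
from lifelines.statistics import logrank_test

# Hypothetical patient table: one row per patient.
df = pd.read_csv("pda_cohort.csv")   # assumed columns: genotype, sex, age_at_dx, os_months, death

# Chi-squared test of independence for a categorical variable across genotype groups.
chi2, p_chi, _, _ = chi2_contingency(pd.crosstab(df["genotype"], df["sex"]))

# Kruskal-Wallis for a quantitative variable (e.g. age at diagnosis) across genotype groups.
groups = [g["age_at_dx"].to_numpy() for _, g in df.groupby("genotype")]
h_stat, p_kw = kruskal(*groups)

# Log-rank test comparing overall survival between wild-type and variant carriers.
wt, var = df[df["genotype"] == "WT"], df[df["genotype"] != "WT"]
lr = logrank_test(wt["os_months"], var["os_months"],
                  event_observed_A=wt["death"], event_observed_B=var["death"])
print(p_chi, p_kw, lr.p_value)
```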
All statistical analyses were conducted using SPSS 22 (IBM Corporation Inc., USA). A power analysis for the Log-rank test was based on a 2-sided alpha level of 0.05. The genotype ratios were based on previous studies [19,20]. The WT group was taken as the reference, with an assumed survival of 12 months for the studied population (treated pancreatic cancer patients), and compared to non-WT genotypes. Power for a symmetrical difference in survival (i.e., WT equally likely to be better or worse than the comparison group) was calculated, given the exploratory nature of this study, with a clinically significant difference defined as 4 months (S1 Data). APC Incidence and Survival Data The APC I1307K and E1317Q polymorphisms were identified in 11 (15%) and one (1.5%) patients, respectively. All the carriers in this cohort were heterozygotes of the APC polymorphisms. There was no statistically significant difference in survival between carriers of the various APC polymorphisms and non-carriers. The median survival of patients with WT APC I1307K was 14 months, as opposed to a median of eight months in the heterozygote group (Fig 1). However, the difference was not significant (p = 0.528). Only one patient was heterozygote for the APC E1317Q polymorphism, precluding the detection of a survival difference. Interestingly, that patient survived for much longer than the median, i.e., 28 months. CD24 incidence and survival data. The CD24 C170T polymorphism was identified in 29 (39.7%) heterozygote patients and four (5.5%) homozygote patients. The TG1527del CD24 variant was present in seven (9.6%) heterozygote patients and one (1.4%) homozygote patient. The CD24 A1056G polymorphism was identified in 50 patients, 34 (46.6%) of whom were heterozygotes and 16 (21.9%) homozygotes (Table 2). There was no statistically significant difference in survival between carriers of the various CD24 polymorphisms and non-carriers. The median overall survival (mOS) of the CD24 C170T heterozygotes and homozygotes was 13 and 14 months, respectively, compared to 14 months for the WT group (p = 0.722, Fig 2A). The mOS of the TG1527del variant carriers was 13 months for the WT group, 21 months for the heterozygote patients, and 42 months for the single homozygote patient (p = 0.494, Fig 2B). The 34 patients who were heterozygotes for the CD24 A1056G polymorphism had a mOS of 12 months, as opposed to 15 months in the WT group and 16 months for the homozygote patients (p = 0.685, Fig 2C). The final group to be tested, those with the A1626G variant, also failed to show any survival difference, with a mOS of 15, 13 and 16 months in WT, heterozygote and homozygote patients, respectively (p = 0.834, Fig 2D). Genetic Permutations We identified three genotypes which recurred in our cohort with high incidence. Genotype 1 (10/73 patients, 14% of the cohort) was WT for all six polymorphisms tested, with the exception of CD24 A1626G heterozygosity. Genotype 2 (10/73 patients, 14%) was WT for both APC variants and for CD24 TG1527del, and heterozygote for the remaining three CD24 polymorphisms. The third group, genotype 3 (9/73 patients, 10%), was heterozygote for the CD24 C170T and A1056G polymorphisms and WT for the rest. There was no statistical difference in survival between the patients with the three different genotypes and the rest of the cohort (p = 0.440). At the time of diagnosis, the median age of the 73 patients was 64 years, and there was no statistically significant difference in age between carriers of the various APC and CD24 polymorphisms.
However, when assessing the three dominant genotypes, the age at diagnosis was 58.5 years for genotype 1, 60 years for genotype 2, and 74 years for genotype 3, which was statistically significant compared to the rest of the cohort (p = 0.041, Fig 3). Comparison of additional clinical characteristics, such as response to first-line chemotherapy (p = 0.27), time until disease progression (p = 0.536) and tumor grade (p = 0.782) revealed that none was significantly different between the groups. Discussion Ectopic expression of CD24 in tumor cells increases proliferation, promotes tumor cell adhesion to fibronectin, laminin and collagen I, IV (as well as P-selectin), and contributes to greater cell motility and invasiveness [5,21]. Advances in recent research have linked CD24 to progression and malignancy of several epithelial and hematopoietic tumors. High expression of this protein was reported as being an independent predictor of poor survival in lung cancer [22][23]. Cytoplasmic staining for CD24 protein was absent in normal epithelium in ovarian cancer, and present in ovarian adenocarcinoma, and its overexpression was independently linked to a worse survival [24][25]. The same correlation was demonstrated in breast [26] and prostate cancers [27][28]. CD24 has an important oncogenic role in gastrointestinal tumors. Its overexpression was independently linked to poor prognosis in esophageal cancer [29][30]. Positive immunohistochemical staining for cytoplasmic CD24 was related to more advance staging in patients with gastric cancer [31]. A recent study showed that colon mucosa has an increased expression of CD24, even at an early stage of malignant transformation [21]. CD24 overexpression was correlated to decreased survival in colorectal cancer [32]. Sagiv et al. demonstrated a role of CD24 in the carcinogenetic process in pancreatic cancer [33]. In their study, human Colo357 pancreatic adenocarcinoma cells were treated in vitro with anti-CD24 mABs, which resulted in the arrest of cell growth in a dose-and time-dependent manner, while the cells negative to CD24 expression were not similarly affected. Anti-CD24 mABs were also effective in reducing tumor growth of bxpc3 pancreatic cancer xenografts in mice [34]. Pancreatic and colon adenocarcinoma cells treated in vitro with shRNA reportedly downregulated CD24 expression and had impaired cell growth and motility [11]. The current study sought an association between four different CD24 SNPs, two APC genetic variants, and the clinical course of pancreatic cancer. To the best of our knowledge, this is the first report on an investigation of these associations. There was no difference in the survival of heterozygous and homozygous patients, possibly due to the relatively small size of the cohort. Moreover, age at diagnosis, response to treatment, and time until disease progression also appeared to have no prognostic value in this setting. The incidence of the four different CD24 SNPs in our patient population was similar to its incidence among the healthy subjects described by our group in an earlier work [19], with the only significant exception having been found in the incidence of CD24 1626 A!G : 29% of the healthy controls in that study were heterozygotes, compared to 71.2% of the PDA patients in the current work. We had investigated the incidence of APC polymorphisms I1307K and E1317Q in healthy subjects in the past [20]. 
The incidence of the E1317Q variant was 1.2% (twelve subjects out of 1000 screened), which is similar to our present result of 1.4% in PDA patients. The I1307K variant, however, was identified in only 5.3% of healthy subjects, as compared with 15% of PDA patients. Interestingly, out of many different genotype permutations, three stood out as being predominant, occurring in 14%, 14% and 10% of the patients. Patients with two of the three genotypes, heterozygosity for CD24 A1626G (WT for the rest) and heterozygosity for CD24 C170T, A1626G and A1056G (WT for the rest), were diagnosed at a significantly younger age than the rest of the group (medians of 58.5 and 60 years, respectively, compared with 64 years). The third genotype, the one with heterozygosity for CD24 C170T and A1056G and WT for the rest, was diagnosed at a much older age (74 years). Conclusion Although we were unable to detect any impact of the APC and CD24 genes on survival in patients with PDA, the data shown here warrant further study of their role in pancreatic carcinogenesis and prognosis, in a larger cohort.
3,962
2015-09-22T00:00:00.000
[ "Medicine", "Biology" ]
AG-LRTR: An Adaptive and Generic Low-rank Tensor-based Recovery for IIoT Network Traffic Factors Denoising The ubiquitous 5G-enable industrial Internet of Things interconnects a great number of intelligent sensors and actors. Network management becomes challenging due to massive traffic data generated by industrial equipment. However, the conventional single traffic factor is insufficient for the increasingly complicated network engineering tasks due to the poor representation capability. Besides, the insecure equipment with open communication access easily brings irregular network fluctuations to network traffic which interferes with the primary traffic factor. The simple and interfered traffic factor decreases the network management efficiency and misleads the operators. Motivated by that, we construct a comprehensive tensor model representing multi-dimension traffic factors to describe the network traffic beneficial characteristics. Meanwhile, an adaptive and generic low-rank tensor recovery (AG-LRTR) algorithm in the tensor singular value decomposition (t-SVD) framework is proposed for denoising. For effective tensor recovery, the alternating direction method of multipliers is employed to theoretically solve the partial augmented Lagrangian function of our objective with a closed-form solution. Numerical experiments on both synthetic data and real-world traffic data in IIoT validate that our proposed algorithm outperforms other state-of-the-art of tensor recovery algorithms. I. INTRODUCTION T HE Industrial Internet of Things (IIoT) is rapidly developing with the emerging 5G-enable communication technology. The 5G communication technique provides coinstantaneous broadband access [1], and thus the IIoT network expands the scope of various intelligent equipment extensively. Effective network management becomes increasingly critical for operators. However, with the tremendous growth of traffic data, one challenge is to abstract the beneficial characteristics from the massive traffic data. Nowadays, the traffic volume is still the basic factor in the IIoT network management [2], [3] and only reflects the flow-based correlations in successive slots. It loses the other intrinsic correlations in the traffic data, such as the packet-based correlation. For example, the packet interval arrival time is preferable to the routing program than traffic volume. The single factor model is insufficient to more complicated network engineering tasks gradually. Besides, another challenge is the irregular network fluctuations due to the insecure equipment with open communication access. Parts of the network engineering tasks are susceptible to noise interference, so the irregular network fluctuations degenerate network management efficiency. For example, the network intrinsic resilient capacity should have avoided route reprogramming in instant network congestion, but this fluctuation is likely to cause once redundant reprogramming. To enhance the representation capability and improve the network management efficiency, a new network traffic model is needed to represent the valuable factors, and precise denoising is essential to reduce unnecessary operations. The supervisory control and data acquisition (SCADA) system is the central platform in the industry for aggregating and coordinating the network traffic transaction from edge equipment. The irregular network fluctuations are caused by network congestion, equipment failure or operator error as Fig. 1 shows. 
Many researchers have proposed network traffic modeling and denoising algorithms. The vector [4], [5] ,matrix [6], [7] and tensor [8] models are proposed respectively to recover the original traffic volume in which the tensor-based model outperformed the other models. As mentioned above, the volume only represents one single traffic factor, and the model representation capability is deficient. Inspired by the tensor recovery performance, we naturally choose the tensor to construct a novel traffic factor model. The essence of successful recovery is the low-rank property due to the multiple types of correlations in the traffic. Tensor recovery aims to realize low-rank tensor approximation from the noisy traffic factors. The regular optimization object of tensor recovery is to minimize the sum of the tensor nuclear norm and the reconstruction error. The same amplitude shrinkage for the tensor nuclear norm introduces additional bias and variance by the thresholding estimator in the recovery procedures. The reconstruction error only assumes the noise satisfies the zero-mean and ignores the non-zero mean influence. Both deficiencies result in the sub-optimal solution for the regular optimization object. To improve the optimization performance, our tensor recovery strategies in this paper are summarized as follows: 1) To enhance the tensor representation capability, we construct a compact and comprehensive traffic tensor model with ten factors: traffic volume, packet number, inter-arrival time (IAT), etc. Moreover, we further reveal that such a tensor satisfies the low-rank property. 2) We propose an adaptive and generic optimization object for precise denoising by minimizing the sum of the weighted tensor nuclear norm and the noise variance. The weighted tensor nuclear and generic noise Frobenius norm respectively alleviate the influence of the estimator and the non-zero mean noise. 3) The adaptive and generic optimization object is solved by the alternating direction method of multipliers (ADMM) algorithm. Each optimization procedure has a closed-form solution in ADMM. We further perform numerical experiments on synthetic data and a real SCADA traffic dataset as an IIoT example to validate the effectiveness of our algorithm. The rest of this paper is structured as follows. Section II introduces the preliminaries of an effective low-rank tensor recovery procedure and the tensor singular value decomposition (t-SVD) framework. Section III details the tensor model, adaptive nuclear norm formulation, and solver algorithm for ADMM. Numerical experiments conducted on synthetic and real-world data are presented in Section IV, and we conclude this work in Section V. II. PRELIMINARY OF TENSOR RECOVERY Tensor recovery is realized on the basis of the tensor lowrank property and decomposition approach. The low-rank property is embedded in the structure of entity arrangement, and the decomposition approach affects the recovery performance. This section summarizes an effective low-rank tensor recovery procedure and then introduces the tensor singular value decomposition framework involved in this paper. A. TENSOR RECOVERY Multi-dimensional data is becoming prevalent in many areas, such as computer vision [9], [10] and information science [11]. Tensor as a multi-dimensional extension of the matrix is a natural choice in these cases and has the capability of capturing these underlying multi-linear structures. 
Although often residing in extremely high-dimensional spaces, the tensor of interest is frequently of low rank, or approximately so [12]. Lying at the core of high-dimensional data analysis, tensor decomposition serves as a valuable tool for revealing when a tensor can be modeled as lying close to a lowdimensional subspace [13]. As for data analysis by tensor decomposition, the first step is to construct an appropriate tensor that could contain intrinsic correlations in the data. Except for some kinds of natural tensor data, such as the hyperspectral data, the seismic data, or the colorful picture, other data need to be rearranged as some regulation based on the intrinsic correlation. For example, in some applications for video background model in [9], [14], the original 4-dimensions video data was reshaped to a 3-dimension tensor through matricization of the colorful frame along the time mode. However it would degenerate the performance due to the information loss of frame matricization. Furthermore, the IT traffic tensor constructed in [8] gains periodic pattern in addition to temporal stability and spatial correlation only for traffic volume. Although the tensor entries can be substituted by other network traffic factors, this model intrinsically ignores the multiple correlations between the factors and could not represent the complete traffic properties. Therefore, the constructed tensor model should contain the necessary correlations as much as possible. Then the second step is to select a practical tensor decomposition approach to approximate the low-rank tensor. The two most popular tensor decomposition approaches, namely CANDECOMP/PARAFAC(CP) [15], [16] and Tucker de-composition [17] are known that the truncated CP or truncated Tucker is not the best low-rank approximation [18]. Compared with them, tensor singular value decomposition(t-SVD) [19] is based on the tensor-tensor product operator [20] and the calculation procedure needs to transform a tensor from the original domain to the Fourier domain along a fixed mode by discrete Fourier transform(DFT).As a newly emerged tensor decomposition paradigm, it has several properties similar to the traditional matrix SVD and decomposes any tensor with less prior information, so it is the optimality of the truncated t-SVD for data approximation. Therefore in this paper, the task of IIoT traffic denoising lies in the t-SVD framework. The last and most important step is defining the tensor rank based on the tensor decomposition approach and solving the optimization object by corresponding rank relaxation as the tensor nuclear norm. In the t-SVD framework, the basic tensor rank is called tubal rank [20] which is defined as the number of nonzero singular tubes in the original domain and relaxed by the sum of all singular values in the Fourier domain. Due to the lack of considering the relations of the singular values in the original and Fourier domain, the basic algebra calculation is more complex and computational. For simplicity and elegance, the average tubal rank is defined as the mean rank of the block circulant matrix in [14]. Its rank relaxation is to sum the singular values of the first frontal slice in the original domain. It is rigorously deduced theoretically as a new tensor nuclear norm and has similar theorems with the matrix SVD. The other approaches are either extension [10], [21] or combination [22] underlain by the rank definition and relaxation in [20] or [14]. 
In this paper, we have the same rank definition and relaxation with [14], because the adaptive coefficients to shrink the nuclear norm needs to be calculated by the unique singular values. B. TENSOR SINGULAR VALUE DECOMPOSITION T-SVD operation as an extension to matrix SVD is based on the tensor-tensor product(t-product). For simplicity, we mainly introduce the correlation between the original domain and the Fourier domain caused by DFT. The basic definitions related to the t-SVD framework are given in Appendix. (1) where f old(·), bcirc(·), unf old(·) denote the fold,block circulant and unfold operation for a tensor respectively and A, B, C is block diagonal matrix of A, B, C. In the original domain, a 3-mode tensor can be regarded as a matrix, with each entry being a tube that lies in the third mode. Thus, the tproduct is analogous to the matrix multiplication, except that the circular convolution replaces the multiplication operation between the entries. In the Fourier domain, the t-product is equivalent to the matrix multiplication. The t-product enjoys many similar properties to the matrix-matrix product. Then for any 3-mode tensors, A ∈ R n1×n2×n3 the t-SVD is defined in the original and Fourier domain as follows: where U ∈ R n1×n1×n3 , V ∈ R n2×n2×n3 are orthogonal, and S ∈ R n1×n2×n3 is an f-diagonal tensor. See Fig. 2 for an intuitive illustration of the t-SVD operation. FIGURE 2. An illustration of tensor singular value decomposition In this paper, we refer to the rank definition and relaxation in [14]. The entries on the diagonal of the first frontal slice S(:, :, 1) have the decreasing property as follow where n ′ = min(n 1 , n 2 ). It holds since the inverse DFT gives and the entries on the diagonal of S(:, :, j) are the singular values of A(:, :, j), so the tensor tubal rank is determined by the first frontal slice S(:, :, 1) and equivalent to the number of non-zero singular values of A. Based on the above properties, tensor average rank defined in [14] is the slice mean of the total rank in Fourier domain as rank a (A) = 1 n3 rank(A) and proved that the low average rank assumption is weaker than the low Tucker rank and low CP rank assumption, so it is more convenient to decompose a low rank tensor. Then the relaxation of tensor average rank can be rigorously deduced as summation of the singular values in the first frontal slice S(:, :, 1) and denoted as The discrimination between the tubal and average rank is the coefficient 1 n3 that is crucial to guarantee the convex envelope of the average tensor rank in a specific scope. Therefore, adopting an adaptive strategy for IIoT traffic denoising is possible based on the above-defined tensor nuclear norm. III. TENSOR-BASED MODELING AND DENOISING FOR IIOT NETWORK TRAFFIC To improve the IIoT network management efficiency, we conduct a novel tensor model with ten traffic factors based on the periodical transaction mechanism to enhance the representation capability firstly. We further validate that such a VOLUME 4, 2016 representation model has a low-rank property that underlies the effective denoising. Then an adaptive and generic optimization object is reformulated to improve the denoising performance, and finally a closed-form ADMM algorithm is proposed to solve the object. A. TENSOR MODELLING AND LOW-RANK ANALYSIS It is impossible to analyze the per packet due to the massive amount of traffic data in the industrial internet of things. 
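Before turning to the traffic model, the t-product, t-SVD and tensor nuclear norm of Section II-B can be made concrete with a short NumPy sketch that works slice-wise in the Fourier domain; the conjugate symmetry of the FFT of a real tensor is exploited so that the factors come out real. The function names are ours and the snippet illustrates the standard construction rather than the implementation used in this paper.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3): slice-wise products in the Fourier domain."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum("ijk,jlk->ilk", Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    """Tensor transpose: transpose each frontal slice and reverse the order of slices 2..n3."""
    n1, n2, n3 = A.shape
    At = np.zeros((n2, n1, n3))
    At[:, :, 0] = A[:, :, 0].T
    for k in range(1, n3):
        At[:, :, k] = A[:, :, n3 - k].T
    return At

def t_svd(A):
    """t-SVD A = U * S * V^T: SVD of each Fourier-domain slice, mirrored so the inverse FFT is real."""
    n1, n2, n3 = A.shape
    mn, half = min(n1, n2), n3 // 2 + 1
    Af = np.fft.fft(A, axis=2)
    Uf = np.zeros((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vf = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(half):
        u, s, vh = np.linalg.svd(Af[:, :, k])
        Uf[:, :, k], Vf[:, :, k] = u, vh.conj().T
        Sf[np.arange(mn), np.arange(mn), k] = s
    for k in range(half, n3):                         # conjugate-symmetric slices
        Uf[:, :, k] = np.conj(Uf[:, :, n3 - k])
        Sf[:, :, k] = Sf[:, :, n3 - k]
        Vf[:, :, k] = np.conj(Vf[:, :, n3 - k])
    ifft = lambda X: np.real(np.fft.ifft(X, axis=2))
    return ifft(Uf), ifft(Sf), ifft(Vf)

def tensor_nuclear_norm(A):
    """Relaxation (4): sum of the diagonal of the first frontal slice of S in the original domain."""
    return float(np.trace(t_svd(A)[1][:, :, 0]))

A = np.random.rand(4, 5, 3)
U, S, V = t_svd(A)
print(np.max(np.abs(A - t_product(t_product(U, S), t_transpose(V)))))   # ~1e-15: exact recovery
```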
Effective network management relies on multiple statistical factors that can represent the complete data exchange. In IIoT, SCADA systems are the central platform and automatically coordinate and manage equipment actions to ensure that the infrastructure operates correctly and safely. The primary transaction mechanism [23] is to poll field information and send the corresponding control commands periodically, as Fig. 3 shows. The acquisition and control data are transmitted at determined times within each period. In addition, equipment of the same category has stable operating logic and generates similar traffic. Therefore, the total network traffic can be characterized by periodic throughput patterns, clear packet-size statistics, predictable flow direction, and expected connection lifetime [24]. However, irregular network fluctuations are caused by packet loss, data delay or retransmission, and payload changes, as depicted in Fig. 3. These fluctuations are ultimately reflected in the variances of traffic volume, packet number, and packet inter-arrival time (IAT). To enhance the representation capability, the ten statistical factors listed in Table 1 are calculated periodically in the way described in [25]. These factors are sufficient to represent the data exchange at both the flow-based and packet-based levels and can be applied in most network engineering tasks. Among the ten factors, traffic volume and packet IAT each contribute four factors: the maximum, minimum, mean, and variance. A novel low-rank tensor model can therefore be constructed from these factors to represent the beneficial characteristics of IIoT network traffic. In this paper, a testbed SCADA traffic dataset named Electrical Power and Intelligent Control (EPIC) [26] is used as the real IIoT network traffic dataset; it mimics a real-world power system in a small-scale smart grid. This SCADA system interacts with six categories of equipment, namely access points (AP), programmable logic controllers (PLC), intelligent electronic devices (IED), switches (SW), the history database (HIST), and firewalls (FW), plus a residual category for all other traffic. The most prominent traffic volumes are clearly periodic, as Fig. 4(a) shows, and the least common multiple of the periods can be set to 30 s with a 1 s time slot. For each traffic factor, we compute the statistical value per second as a row vector and repeat this for 30 consecutive periods to form a factor matrix as a frontal slice; the ten factor slices are then stacked along the third mode to construct a traffic tensor of size 30 × 30 × 10, as Fig. 4(b) shows. We decompose the traffic tensor of each equipment category by t-SVD and plot the tensor singular values of the first frontal slice in Fig. 5. If the traffic has a single common period, as for AP, IED, SW and FW, most of the singular values of the traffic tensor are relatively small, which indicates a strong low-rank structure. However, when the traffic is random, as for HIST, or has multiple periods, as for PLC, the low-rank property is relatively weak because randomness disperses energy uniformly across the Fourier domain. Sparse traffic, such as the residual category, has the weakest low-rank property because its entries are nearly independent and poorly correlated. Furthermore, we decompose the sum of all traffic, and the resulting singular values are depicted in Fig. 6.
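To make this kind of low-rank check concrete, the short sketch below builds a toy periodic factor tensor of the same 30 × 30 × 10 size and computes the diagonal of the first frontal slice of S via a slice-wise SVD in the Fourier domain. The toy data, the helper name tsvd_first_slice_singvals, and the base-R-only implementation are illustrative assumptions on our part, not the EPIC data or the authors' code.

# Toy periodic "traffic factor" tensor: 30 seconds x 30 periods x 10 factors,
# each factor being a fixed per-second profile plus small random fluctuations.
set.seed(1)
n1 <- 30; n2 <- 30; n3 <- 10
A <- array(0, dim = c(n1, n2, n3))
for (k in 1:n3) {
  profile <- sin(2 * pi * (1:n1) / n1 + k) + 2
  A[, , k] <- matrix(rep(profile, n2), nrow = n1) + rnorm(n1 * n2, sd = 0.05)
}
# Diagonal of S(:,:,1) in the t-SVD: the mean over Fourier slices of the
# i-th singular value of each slice (see Section II-B).
tsvd_first_slice_singvals <- function(A) {
  n3 <- dim(A)[3]
  A_f <- aperm(apply(A, c(1, 2), fft), c(2, 3, 1))     # DFT along the third mode
  sv  <- sapply(1:n3, function(j) svd(A_f[, , j])$d)   # singular values per slice
  rowMeans(sv)
}
round(tsvd_first_slice_singvals(A), 2)   # a few large values, the rest near zero

Because the per-second profile repeats across periods, only a handful of the thirty values are large, which is exactly the low-rank signature discussed around Figs. 5 and 6.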
Although the summation consists of different kinds of traffic, it still has a low-rank property, which means that the periodic traffic data dominate. Our proposed tensor model is thus capable of capturing the intrinsic correlations in various aspects. In the remainder of the paper, the IIoT traffic factor tensor is assumed to contain all traffic by default. B. ADAPTIVE AND GENERAL OPTIMIZATION OBJECT The regular optimization objective for low-rank tensor recovery from noisy measurements is to minimize the sum of the tensor rank and the reconstruction error:

$$\min_{\mathcal{L},\,\mathcal{N}} \; \operatorname{rank}(\mathcal{L}) + \lambda\,\|\mathcal{N}\|_F^2 \quad \text{s.t.} \;\; \mathcal{M} = \mathcal{L} + \mathcal{N}, \tag{5}$$

where λ denotes the penalty coefficient. Because the rank operator is non-convex, the nuclear norm $\|\cdot\|_*$, as the tightest convex relaxation of the rank, replaces the first term in (5). In this paper, we define the tensor rank as the average rank and relax it with the tensor nuclear norm introduced above. Both terms of the above optimization objective have a deficiency in the optimization process. As for the first term $\sum_{i=1}^{r}\mathcal{S}(i,i,1)$, solvers based on soft-thresholding shrinkage introduce unavoidable bias [27]: when every singular value is shrunk equally, the variance of the estimated tensor becomes smaller than that of the original tensor. Inspired by the adaptive nuclear norm for low-rank matrix approximation in [28], we extend this idea to the tensor model by assigning weights to the singular values of the tensor. The other term, $\|\mathcal{N}\|_F^2$, assumes that the noise has zero mean and ignores the common non-zero-mean situation, which leads to sub-optimal solutions. Assume the actual mean µ and variance $\hat\delta^2$ of the noise are unknown, and let $\hat\mu$ be a variable representing the estimated mean of the noise; write $\mathcal{I}_{\hat\mu}$ for the tensor of the same size as $\mathcal{N}$ with all entries equal to $\hat\mu$. The variance of a tensor can be calculated from the Frobenius norm $\|\cdot\|_F^2$. If $\hat\mu \neq \mu$, the estimated variance will be larger than $\hat\delta^2$, as the simple argument below shows, where $N = n_1 n_2 n_3 - 1$ denotes the number of elements used in the unbiased estimate:

$$\frac{1}{N}\|\mathcal{N}-\mathcal{I}_{\hat\mu}\|_F^2 = \frac{1}{N}\|\mathcal{N}-\mathcal{I}_{\mu}\|_F^2 + \frac{2}{N}\langle \mathcal{N}-\mathcal{I}_{\mu},\,\mathcal{I}_{\mu-\hat\mu}\rangle + \frac{n_1 n_2 n_3}{N}(\mu-\hat\mu)^2 \approx \hat\delta^2 + \frac{n_1 n_2 n_3}{N}(\mu-\hat\mu)^2 > \hat\delta^2,$$

since the cross term is approximately zero for noise with mean µ. The adaptive and generic optimization objective is therefore formulated as

$$\min_{\mathcal{L},\,\mathcal{N},\,\mu} \; \sum_{i=1}^{r}\alpha_i\,\mathcal{S}_{\mathcal{L}}(i,i,1) + \lambda\,\|\mathcal{N}-\mathcal{I}_{\mu}\|_F^2 \quad \text{s.t.}\;\; \mathcal{M} = \mathcal{L} + \mathcal{N}, \tag{8}$$

where $\alpha_i$ denotes the i-th adaptive coefficient. The optimization objective (8) can be interpreted as minimizing the sum of the adaptive tensor nuclear norm and the noise variance. An adaptive tensor soft-thresholding (ATSVT) operation then provides a closed-form solution to the subproblem with the adaptive nuclear norm, as follows. Theorem 1. For any λ ≥ 0, $\mathcal{Y}\in\mathbb{R}^{n_1\times n_2\times n_3}$ and $0 \le \alpha_1 \le \cdots \le \alpha_r$ ($r=\min(n_1,n_2)$), a global optimal solution to the optimization problem

$$\min_{\mathcal{X}} \; \sum_{i=1}^{r}\alpha_i\,\sigma_i(\mathcal{X}) + \frac{\lambda}{2}\,\|\mathcal{X}-\mathcal{Y}\|_F^2 \tag{9}$$

is given by the ATSVT as

$$\hat{\mathcal{X}} = \mathcal{U} * S_{\boldsymbol{\alpha}/\lambda}(\mathcal{S}) * \mathcal{V}^{*}, \tag{10}$$

where $\mathcal{Y}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{*}$ is the t-SVD of $\mathcal{Y}$ and $S_{\tau}(\cdot)$ denotes the adaptive soft-thresholding operation that shrinks the i-th singular value $\sigma_i(\mathcal{Y})=\mathcal{S}(i,i,1)$ to $(\sigma_i(\mathcal{Y})-\alpha_i/\lambda)_+$. Further, if $\mathcal{Y}$ has a unique t-SVD, $\hat{\mathcal{X}}$ is the unique optimal solution. The soft-thresholding operation for tensors shrinks the singular values in the Fourier domain. Following [29], the weights can be set as some power of the singular values of the tensor, i.e., $\alpha_i = \frac{1}{\sigma_i(\mathcal{X})^{\gamma}}$, where γ ≥ 0 is a predefined constant; in this way, the order constraint in the theorem is automatically satisfied. That (10) is a closed-form global minimizer of the optimization objective (9) is proved as follows based on von Neumann's trace inequality [30] and the properties of the t-SVD. Proof. We first prove that $\hat{\mathcal{X}}$ is indeed a global optimal solution to (9).
Since the weighted coefficients depend only on the singular values of $\mathcal{X}$, by letting $g = \{g_i\}_{i=1}^{n'} = \sigma(\mathcal{X})$ (which implies that the entries of g are in non-increasing order), (9) can be written as

$$\min_{g\ge 0}\;\Big\{\sum_{i}\alpha_i g_i + \frac{\lambda}{2}\min_{\mathcal{X}:\,\sigma(\mathcal{X})=g}\|\mathcal{X}-\mathcal{Y}\|_F^2\Big\}.$$

For the inner minimization, we have the inequality

$$\|\mathcal{X}-\mathcal{Y}\|_F^2 \;\ge\; \sum_{i}\big(g_i - \sigma_i(\mathcal{Y})\big)^2,$$

which is due to von Neumann's trace inequality. Equality holds when $\mathcal{X}$ admits the singular value decomposition $\mathcal{X} = \mathcal{U} * \operatorname{Diag}(g) * \mathcal{V}^{*}$, where $\mathcal{U}$ and $\mathcal{V}$ are the left and right orthogonal tensors in the t-SVD of $\mathcal{Y}$. The optimization can then be reformulated as

$$\min_{g\ge 0}\;\sum_{i}\Big[\alpha_i g_i + \frac{\lambda}{2}\big(g_i - \sigma_i(\mathcal{Y})\big)^2\Big].$$

This objective is completely separable and attains its minimum at $g_i = \big(\sigma_i(\mathcal{Y}) - \alpha_i/\lambda\big)_+$. This is a feasible solution because $\{\sigma_i(\mathcal{Y})\}$ is in non-increasing order while $\{\alpha_i\}$ is in non-decreasing order. Therefore, the resulting $\hat{\mathcal{X}}$ in (10) is a global optimal solution to the objective function (9). Uniqueness follows from the equality condition of von Neumann's trace inequality when $\mathcal{Y}$ has a unique t-SVD, together with the uniqueness of the solution of the strictly convex optimization. C. CLOSED-FORM ADMM SOLVER ALGORITHM The ADMM algorithm is very efficient for certain convex and non-convex programming problems, and the closed-form solution to each optimization subproblem guarantees recovery performance. To solve the optimization objective (8), the problem can be reformulated using the partial augmented Lagrangian function

$$L_{\beta}(\mathcal{L},\mathcal{N},\mu,\mathcal{P}) = \sum_{i=1}^{r}\alpha_i\,\mathcal{S}_{\mathcal{L}}(i,i,1) + \lambda\,\|\mathcal{N}-\mathcal{I}_{\mu}\|_F^2 + \langle\mathcal{P},\,\mathcal{M}-\mathcal{L}-\mathcal{N}\rangle + \frac{\beta}{2}\,\|\mathcal{M}-\mathcal{L}-\mathcal{N}\|_F^2,$$

where $\mathcal{P}\in\mathbb{R}^{n_1\times n_2\times n_3}$ is the tensor of Lagrange multipliers and β > 0 is a penalty parameter; below we deduce the closed-form solutions to all subproblems. The variables are updated sequentially in each iteration as

$$\mathcal{L}^{k+1} = \arg\min_{\mathcal{L}} L_{\beta_k}(\mathcal{L},\mathcal{N}^{k},\mu^{k},\mathcal{P}^{k}),\qquad \mathcal{N}^{k+1} = \arg\min_{\mathcal{N}} L_{\beta_k}(\mathcal{L}^{k+1},\mathcal{N},\mu^{k},\mathcal{P}^{k}),\qquad \mu^{k+1} = \arg\min_{\mu} \|\mathcal{N}^{k+1}-\mathcal{I}_{\mu}\|_F^2,$$
$$\mathcal{P}^{k+1} = \mathcal{P}^{k} + \beta_k\,(\mathcal{M}-\mathcal{L}^{k+1}-\mathcal{N}^{k+1}),\qquad \beta_{k+1} = \rho\,\beta_k,$$

where ρ ∈ (1.0, 1.1] denotes the adjustment coefficient used to accelerate convergence and $\beta_0$ is a small constant. A closed-form solution exists for each of these update steps. The term $\langle\mathcal{P},\mathcal{M}-\mathcal{L}-\mathcal{N}\rangle + \frac{\beta}{2}\|\mathcal{M}-\mathcal{L}-\mathcal{N}\|_F^2$ can be merged as $\frac{\beta}{2}\|\mathcal{M}-\mathcal{L}-\mathcal{N}+\frac{\mathcal{P}}{\beta}\|_F^2$ up to a constant. The $\mathcal{L}$- and $\mathcal{N}$-subproblems can therefore be rewritten as

$$\mathcal{L}^{k+1} = \arg\min_{\mathcal{L}} \sum_{i=1}^{r}\alpha_i\,\mathcal{S}_{\mathcal{L}}(i,i,1) + \frac{\beta_k}{2}\Big\|\mathcal{L} - \Big(\mathcal{M}-\mathcal{N}^{k}+\tfrac{\mathcal{P}^{k}}{\beta_k}\Big)\Big\|_F^2,\qquad \mathcal{N}^{k+1} = \arg\min_{\mathcal{N}} \lambda\,\|\mathcal{N}-\mathcal{I}_{\mu^{k}}\|_F^2 + \frac{\beta_k}{2}\Big\|\mathcal{N} - \Big(\mathcal{M}-\mathcal{L}^{k+1}+\tfrac{\mathcal{P}^{k}}{\beta_k}\Big)\Big\|_F^2,$$

where the $\mathcal{L}$-subproblem has the same form as problem (9) and can be solved by the closed-form solution (10), and the $\mathcal{N}$-subproblem is a simple quadratic with the closed-form solution $\mathcal{N}^{k+1} = \big(2\lambda\,\mathcal{I}_{\mu^{k}} + \beta_k(\mathcal{M}-\mathcal{L}^{k+1}+\mathcal{P}^{k}/\beta_k)\big)/(2\lambda+\beta_k)$. The µ-subproblem can be solved in the original domain by taking the mean of all entries, $\mu^{k+1} = \operatorname{mean}(\mathcal{N}^{k+1})$. Finally, the pseudocode for low-rank tensor recovery from noisy measurements with the adaptive nuclear norm is described in Algorithm 1. IV. PERFORMANCE EVALUATION We conduct numerical experiments to validate the efficiency of our proposed AG-LRTR under zero-mean and non-zero-mean noise interference. Three experiments on synthetic data analyze the noise influence, evaluate the recovery performance, and discuss the parameter variation, respectively. One more experiment on the EPIC SCADA traffic data is conducted as a practical application. The peak signal-to-noise ratio (PSNR) is used as the metric to measure the quality of the recovery,

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{n_1 n_2 n_3\,\|\mathcal{L}\|_{\infty}^2}{\|\hat{\mathcal{L}}-\mathcal{L}\|_F^2}\right),$$

where $\mathcal{L}$ denotes the ground-truth tensor and $\hat{\mathcal{L}}$ denotes the recovered low-rank tensor; larger PSNR values correspond to higher-quality results. All experiments are conducted on a PC with a 2.9 GHz CPU and 8 GB RAM. Moreover, we compare the AG-LRTR algorithm with four other tensor recovery algorithms, abbreviated as LRMR, LRTR, G-LRTR, and TRPCA.
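For reference, the sketch below computes the PSNR in the tensor form given above from two plain R arrays; the function name is our own, and the infinity norm of the ground truth is used as the peak value, as in the definition.

# PSNR between a ground-truth tensor L and a recovered tensor L_hat.
psnr <- function(L, L_hat) {
  stopifnot(all(dim(L) == dim(L_hat)))
  err <- sum((L_hat - L)^2)                      # squared Frobenius norm of the error
  10 * log10(prod(dim(L)) * max(abs(L))^2 / err)
}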
LRMR denotes low-rank matrix recovery, which flattens the original tensor into a matrix and then recovers the low-rank component by RPCA. LRTR denotes the low-rank tensor recovery method of [31], originally used to restore hyperspectral images, and G-LRTR extends LRTR only by generalizing the noise formulation. TRPCA denotes the tensor robust principal component analysis used in [14] to recover a tensor from sparse noise. The results illustrate that our AG-LRTR algorithm outperforms the other algorithms in tensor recovery. A. SYNTHETIC DATA The synthetic tensor $\mathcal{A}\in\mathbb{R}^{n_1\times n_2\times n_3}$ with $\operatorname{rank}(\mathcal{A}) = r$ can be generated directly by the tensor-tensor product as

$$\mathcal{A} = \mathcal{Q} * \mathcal{R}, \tag{25}$$

where $\mathcal{Q}\in\mathbb{R}^{n_1\times r\times n_3}$, $\mathcal{R}\in\mathbb{R}^{r\times n_2\times n_3}$ and all entries are sampled independently and identically from the uniform distribution U(0, 1). First, we summarize the influence of different kinds of noise on the tensor singular values, then compare the recovery performance of the five algorithms, and finally discuss the parameter settings of the proposed algorithm. 1) Noise influence To be consistent with the size of our proposed SCADA traffic tensor model, the size of the original tensor is 30 × 30 × 10 and the rank is four in this experiment. To examine the noise influence on the tensor rank, two major categories are considered: zero-mean noise and non-zero-mean noise. For zero-mean noise, noise with variances ranging from 1 to 10 is added to the original tensor, as Fig. 7(a) shows. Because non-zero-mean noise can be decomposed into the sum of zero-mean noise and a constant value, only the influence of the constant value is depicted in Fig. 7(b). As we can see, whether the mean of the noise is zero or not, the randomness of the noise increases all the singular values of the tensor with a similar tendency, and the increment is positively correlated with the variance of the noise. For non-zero-mean noise, the constant mean only affects the largest singular value, as proved below. Moreover, the discrepancy between the original and noisy tensors differs across the singular values, as Fig. 8 shows. Since noise of a given variance influences each singular value of the original tensor differently, the adaptive nuclear norm can shrink each singular value to a different extent, which improves recovery performance. Proof. Assume a low-rank tensor $\mathcal{L}\in\mathbb{R}^{n_1\times n_2\times n_3}$ and a constant tensor $\mathcal{I}_{\mu}\in\mathbb{R}^{n_1\times n_2\times n_3}$; then the representation of the tensor $\mathcal{M} = \mathcal{L} + \mathcal{I}_{\mu}$ in the Fourier domain can be calculated as

$$\bar{\mathcal{M}} = \bar{\mathcal{L}} + \mu\,\bar{\mathcal{I}}_{1}, \tag{26}$$

where $\mathcal{I}_{1}$ denotes the all-ones tensor. As for the second term in equation (26), all entries of the tensor $\mathcal{I}_{1}$ are 1 and its block circulant matrix is a ones-matrix in which all entries are equal to 1. Each frontal slice of $\bar{\mathcal{I}}_{1}$ in the Fourier domain then has all entries equal to the same value,

$$\bar{\mathcal{I}}_{1}^{(i)} = \big(F_{n_3}(i,:)\,\mathbf{1}\big)\,I_{1}, \tag{27}$$

where $F_{n_3}(i,:)$ denotes the i-th row vector of the DFT matrix, $\mathbf{1}$ is a ones-vector, and $I_{1}$ denotes a ones-matrix of size $n_1\times n_2$. Each such frontal slice is therefore a rank-one matrix with a single nonzero singular value, and its left and right singular vectors are ones-vectors of sizes $n_1$ and $n_2$. Because the first row and first column of $F_{n_3}$ are ones-vectors, both $\bar{\mathcal{L}}^{(i)}$ and $\bar{\mathcal{I}}_{1}^{(i)}$ have the same singular vectors and $\bar{\mathcal{I}}_{1}^{(i)}$ lies in the subspace of $\bar{\mathcal{L}}^{(i)}$. Hence the constant tensor only affects the largest singular value of the original tensor. 2) Recovery comparison A low-rank tensor of size 30 × 30 × 10 is generated as the original tensor and corrupted with different noise as the measurement, in order to comprehensively compare the recovery performance.
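A minimal sketch of this generation step is shown below: the t-product is computed slice-wise in the Fourier domain, which is equivalent to Definition 1.2 in the Appendix, and the dimensions and rank follow the noise-influence experiment. The function name tprod and the use of base R are our own choices, not the authors' implementation.

# t-product A = Q * R computed via slice-wise matrix products in the Fourier domain.
tprod <- function(Q, R) {
  n1 <- dim(Q)[1]; n2 <- dim(R)[2]; n3 <- dim(Q)[3]
  Qf <- aperm(apply(Q, c(1, 2), fft), c(2, 3, 1))   # DFT along the third mode
  Rf <- aperm(apply(R, c(1, 2), fft), c(2, 3, 1))
  Cf <- array(0i, dim = c(n1, n2, n3))
  for (j in 1:n3) Cf[, , j] <- Qf[, , j] %*% Rf[, , j]
  Re(aperm(apply(Cf, c(1, 2), function(v) fft(v, inverse = TRUE) / n3), c(2, 3, 1)))
}
# Synthetic tensor of size 30 x 30 x 10 with tubal rank 4 and U(0,1) factor entries.
set.seed(42)
n1 <- 30; n2 <- 30; n3 <- 10; r <- 4
Q <- array(runif(n1 * r * n3), dim = c(n1, r, n3))
R <- array(runif(r * n2 * n3), dim = c(r, n2, n3))
A <- tprod(Q, R)

Noisy measurements can then be formed by adding noise with the desired mean and variance to A.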
The recovery performance results are listed in Table 2. We can see that our proposed AG-LRTR outperforms the other algorithms in all cases. Especially when the noise has low variance, AG-LRTR improves the recovery performance effectively, because the influence of low-variance noise on the small singular values is much greater than on the large singular values, as Fig. 8 shows. The adaptive nuclear norm can retain the information of the original tensor in the main singular values while alleviating the noise influence on the others. Besides, G-LRTR and LRTR have similar performance, which is better than that of LRMR, which loses structural information through matricization. TRPCA has denoising capability when the mean of the noise is zero; however, it fails to recover the original tensor from non-zero-mean noise, which does not satisfy its sparsity assumption. 3) Parameter discussion The recovery performance of our proposed AG-LRTR algorithm mainly depends on the parameters of problem (8), in which the adaptive coefficients $\alpha_i$, i = 1, 2, ..., r are determined by the parameter γ, so we discuss the influence of the two core parameters λ and γ on recovery performance. Before that, we relate our algorithm to the other algorithms. If the parameter γ is set to 0, all adaptive coefficients are equal to 1 and the sum of weighted singular values reduces to the standard nuclear norm, so our algorithm degenerates to G-LRTR. Moreover, if the mean variable of the noise µ is fixed to 0, the optimization objective becomes the same as that of LRTR; and if all tensors have only one frontal slice, the tensor degenerates to a matrix and LRTR coincides with LRMR. Based on these connections, we first set the parameter γ = 0 to discuss the influence of the penalty coefficient λ, as in LRTR, and then explore the optimal parameters (γ, λ) of AG-LRTR by a grid search strategy. In these experiments, the size of the original tensor is 30 × 30 × 10 and the rank is 3. When γ = 0, the influence of λ on recovery performance is depicted in Fig. 9, which is consistent with intuition. The penalty coefficient balances the nuclear norm of the tensor and the noise variance, so as the variance increases, the penalty coefficient needs to decrease to keep the nuclear norm and the variance of the same order. As for the optimal parameters (λ, γ) of AG-LRTR, λ depends on γ owing to the adaptive nuclear norm, as Fig. 10 shows for a noise variance of 1. We can see that the optimal λ corresponding to the peak PSNR decreases as γ increases, because more information from the larger singular values is retained, so a lower λ is needed to reduce the noise variance and keep the balance. The other noise variances follow the same pattern. B. TENSOR RECOVERY FOR THE REAL-WORLD DATA We introduced the SCADA systems and constructed a tensor model for the traffic of the EPIC testbed in Section III-A, so the recovery performance of all algorithms is compared on the low-rank SCADA traffic tensor, as Fig. 11 shows. The rank of the SCADA traffic tensor is clearly 3 in Fig. 6, and the largest singular value is about 10, which is much smaller than for the synthetic data, so the noise variance needs to be chosen cautiously. The recovery performance for SCADA traffic is depicted for zero-mean noise with variance ranging from 0.02 to 0.18.
Our proposed AG-LRTR algorithm still outperforms the other algorithms and improves substantially on the PSNR of the noisy observation. The performance of LRTR and LRMR is similar, which indicates that the correlations between the frontal slices are weaker than in the synthetic data. The TRPCA performance is the worst even though it has some denoising ability. When the mean of the noise is non-zero, the recovery performance of all algorithms shows the same tendency except for TRPCA, which fails to denoise. V. CONCLUSION Based on the recently developed t-SVD, an adaptive and generic low-rank tensor recovery algorithm is proposed to recover the original traffic factor tensor from irregular network fluctuations in the IIoT scenario. We construct a novel tensor model to abstract multiple correlations from the traffic data and retain as many traffic factors as possible through the adaptive and general optimization objective. Numerical experiments on synthetic data and real-world SCADA traffic verify that our algorithm is effective for denoising in network management. Several interesting directions remain for future work. Because the increment of each singular value depends on the noise variance, the adaptive nuclear norm criterion should combine information about the noise with the singular values, which is ignored in this paper. Besides, the optimal parameters lack a theoretical guarantee and are found by grid search, so an effective approach to parameter selection is needed to save computational cost. Moreover, deep learning could be employed when little prior information about the original tensor and the noise is available. PRELIMINARY DEFINITIONS AND PROPERTIES OF T-SVD First, we briefly introduce the basics of tensor notation and related operations. Scalars are denoted by lower-case letters such as i, j, k and vectors by bold lower-case letters such as a, b, c. Matrices are denoted by upper-case letters, e.g., X. Tensors are denoted by calligraphic letters, e.g., $\mathcal{X}$, and the entries of an N-mode tensor are denoted by $x_{i_1,\cdots,i_N}$. The identity matrix of size n × n is denoted $I_n$. The fields of real and complex numbers are denoted $\mathbb{R}$ and $\mathbb{C}$. For a 3-mode tensor $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$, its i-th horizontal, lateral and frontal slices are denoted $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,i,:)$ and $\mathcal{A}(:,:,i)$, respectively; the frontal slice, in particular, can be abbreviated as $A^{(i)}$. The tube along the third mode is denoted $\mathcal{A}(i,j,:)$. In the complex field, the complex conjugate of $\mathcal{A}$ is denoted $\operatorname{conj}(\mathcal{A})$, which takes the complex conjugate of each entry of $\mathcal{A}$. The inner product between A and B in $\mathbb{C}^{n_1\times n_2}$ is defined as $\langle A, B\rangle = \operatorname{Tr}(A^{*}B)$, where $A^{*}$ denotes the conjugate transpose of A and $\operatorname{Tr}(\cdot)$ denotes the matrix trace. The inner product of two same-sized tensors $\mathcal{A},\mathcal{B}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$ can be represented as the sum of the inner products of their frontal slices, which equals the sum of the entrywise products. For a tensor $\mathcal{A}$, the $\ell_1$, infinity and Frobenius norms are defined as

$$\|\mathcal{A}\|_1 = \sum_{ijk}|a_{ijk}|,\qquad \|\mathcal{A}\|_\infty = \max_{ijk}|a_{ijk}|,\qquad \|\mathcal{A}\|_F = \sqrt{\sum_{ijk}|a_{ijk}|^2}.$$

The DFT of a vector $v\in\mathbb{R}^{n}$ is $\bar{v} = F_n v$, where $F_n$ is the DFT matrix defined as

$$F_n = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1\\ 1 & \omega & \omega^{2} & \cdots & \omega^{n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & \omega^{n-1} & \omega^{2(n-1)} & \cdots & \omega^{(n-1)(n-1)} \end{bmatrix},$$

where $\omega = e^{-\frac{2\pi i}{n}}$ is a primitive n-th root of unity with $i = \sqrt{-1}$. Note that $F_n/\sqrt{n}$ is a unitary matrix.
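As a small numerical illustration of this unitarity property, the sketch below constructs F_n explicitly and checks that the conjugate transpose of F_n/√n times itself is the identity; the helper name dft_matrix is our own.

# Construct the n x n DFT matrix F_n and verify that F_n / sqrt(n) is unitary.
dft_matrix <- function(n) {
  omega <- exp(-2i * pi / n)
  outer(0:(n - 1), 0:(n - 1), function(j, k) omega^(j * k))
}
n  <- 8
Fn <- dft_matrix(n)
U  <- Fn / sqrt(n)
max(Mod(Conj(t(U)) %*% U - diag(n)))   # on the order of 1e-15, i.e. numerically the identity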
Lemma 1.1: Given any real vector $v\in\mathbb{R}^{n}$, the associated DFT vector $\bar{v} = F_n v$ satisfies

$$\bar{v}_1 \in \mathbb{R} \quad\text{and}\quad \operatorname{conj}(\bar{v}_i) = \bar{v}_{n-i+2},\;\; i = 2,\cdots,\Big\lfloor \frac{n+1}{2} \Big\rfloor. \tag{32}$$

Conversely, for any given complex $\bar{v}\in\mathbb{C}^{n}$ satisfying (32), there exists a real circulant matrix $\operatorname{circ}(v)$ such that

$$F_n\cdot \operatorname{circ}(v)\cdot F_n^{-1} = \operatorname{Diag}(\bar{v}).$$

To extend similar properties to tensors, some tensor-specific operations are needed. Operation 1.1 (bdiag): For any $\bar{\mathcal{A}}\in\mathbb{C}^{n_1\times n_2\times n_3}$, the block diagonal matrix $\bar{A}\in\mathbb{C}^{n_1 n_3\times n_2 n_3}$ has as its i-th diagonal block the i-th frontal slice $\bar{A}^{(i)}$ of $\bar{\mathcal{A}}$:

$$\bar{A} = \operatorname{bdiag}(\bar{\mathcal{A}}) = \begin{bmatrix}\bar{A}^{(1)} & & &\\ & \bar{A}^{(2)} & &\\ & & \ddots &\\ & & & \bar{A}^{(n_3)}\end{bmatrix}.$$

Then, based on the DFT of a vector, Lemma 1.1 can be extended to tensors as follows. Lemma 1.2: For any tensor $\mathcal{A}\in\mathbb{R}^{n_1\times n_2\times n_3}$, its DFT $\bar{\mathcal{A}}\in\mathbb{C}^{n_1\times n_2\times n_3}$ satisfies

$$(F_{n_3}\otimes I_{n_1})\cdot \operatorname{bcirc}(\mathcal{A})\cdot (F_{n_3}^{-1}\otimes I_{n_2}) = \bar{A}, \tag{34}$$

where $\otimes$ denotes the Kronecker product and $(F_{n_3}\otimes I_{n_1})/\sqrt{n_3}$ is unitary. Then, we have

$$\bar{A}^{(1)}\in\mathbb{R}^{n_1\times n_2} \quad\text{and}\quad \operatorname{conj}\big(\bar{A}^{(i)}\big) = \bar{A}^{(n_3-i+2)},\;\; i = 2,\cdots,\Big\lfloor \frac{n_3+1}{2} \Big\rfloor. \tag{35}$$

Conversely, for any given $\bar{\mathcal{A}}\in\mathbb{C}^{n_1\times n_2\times n_3}$ satisfying (35), there exists a real tensor $\mathcal{A}\in\mathbb{R}^{n_1\times n_2\times n_3}$ such that (34) holds. Moreover, the following properties of the tensor DFT are used frequently:

$$\|\mathcal{A}\|_F = \frac{1}{\sqrt{n_3}}\,\|\bar{A}\|_F, \qquad \langle \mathcal{A},\mathcal{B}\rangle = \frac{1}{n_3}\,\langle \bar{A},\bar{B}\rangle.$$

Definition 1.2 (t-product): Let $\mathcal{A}\in\mathbb{R}^{n_1\times l\times n_3}$ and $\mathcal{B}\in\mathbb{R}^{l\times n_2\times n_3}$. Then the t-product $\mathcal{A}*\mathcal{B}$ is defined to be the tensor of size $n_1\times n_2\times n_3$

$$\mathcal{A}*\mathcal{B} = \operatorname{fold}\big(\operatorname{bcirc}(\mathcal{A})\cdot \operatorname{unfold}(\mathcal{B})\big).$$

Some other tensor concepts extended from the matrix case are as follows. Definition 1.3 (Conjugate transpose): The conjugate transpose of a tensor $\mathcal{A}\in\mathbb{C}^{n_1\times n_2\times n_3}$ is the tensor $\mathcal{A}^{*}\in\mathbb{C}^{n_2\times n_1\times n_3}$ obtained by conjugate transposing each of the frontal slices and then reversing the order of the transposed frontal slices 2 through $n_3$. Definition 1.4 (Identity tensor): The identity tensor $\mathcal{I}\in\mathbb{R}^{n\times n\times n_3}$ is the tensor whose first frontal slice is the n × n identity matrix and whose other frontal slices are all zeros. Definition 1.5 (Orthogonal tensor): A tensor $\mathcal{Q}\in\mathbb{R}^{n\times n\times n_3}$ is orthogonal if it satisfies $\mathcal{Q}^{*}*\mathcal{Q} = \mathcal{Q}*\mathcal{Q}^{*} = \mathcal{I}$. Definition 1.6 (f-diagonal tensor): A tensor is called f-diagonal if each of its frontal slices is a diagonal matrix.
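To make the fold, unfold and block circulant notation concrete, the following sketch implements the t-product exactly as in Definition 1.2 and checks it against the identity tensor of Definition 1.4; the helper names are ours and the example tensor is random.

# Block circulant matrix of a 3-mode tensor: block (i, j) holds frontal slice ((i - j) mod n3) + 1.
bcirc <- function(A) {
  n1 <- dim(A)[1]; n2 <- dim(A)[2]; n3 <- dim(A)[3]
  M <- matrix(0, n1 * n3, n2 * n3)
  for (i in 1:n3) for (j in 1:n3) {
    k <- ((i - j) %% n3) + 1
    M[((i - 1) * n1 + 1):(i * n1), ((j - 1) * n2 + 1):(j * n2)] <- A[, , k]
  }
  M
}
unfold3 <- function(A) do.call(rbind, lapply(1:dim(A)[3], function(k) A[, , k]))
fold3 <- function(M, n1, n2, n3) {
  A <- array(0, dim = c(n1, n2, n3))
  for (k in 1:n3) A[, , k] <- M[((k - 1) * n1 + 1):(k * n1), , drop = FALSE]
  A
}
# t-product per Definition 1.2: A * B = fold(bcirc(A) %*% unfold(B)).
tprod_def <- function(A, B) fold3(bcirc(A) %*% unfold3(B), dim(A)[1], dim(B)[2], dim(A)[3])
# Identity tensor per Definition 1.4: first frontal slice is I_n, the rest are zero.
id_tensor <- function(n, n3) { I <- array(0, dim = c(n, n, n3)); I[, , 1] <- diag(n); I }
set.seed(3)
A <- array(rnorm(4 * 3 * 5), dim = c(4, 3, 5))
max(abs(tprod_def(A, id_tensor(3, 5)) - A))   # ~ 0, confirming A * I = A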
Predictions from Generative Artificial Intelligence Models: Towards a New Benchmark in Forecasting Practice : This paper aims to determine whether there is a case for promoting a new benchmark for forecasting practice via the innovative application of generative artificial intelligence (Gen-AI) for predicting the future. Today, forecasts can be generated via Gen-AI models without the need for an in-depth understanding of forecasting theory, practice, or coding. Therefore, using three datasets, we present a comparative analysis of forecasts from Gen-AI models against forecasts from seven univariate and automated models from the forecast package in R , covering both parametric and non-parametric forecasting techniques. In some cases, we find statistically significant evidence to conclude that forecasts from Gen-AI models can outperform forecasts from popular benchmarks like seasonal ARIMA, seasonal naïve, exponential smoothing, and Theta forecasts (to name a few). Our findings also indicate that the accuracy of forecasts from Gen-AI models can vary not only based on the underlying data structure but also on the quality of prompt engineering (thus highlighting the continued importance of forecasting education), with the forecast accuracy appearing to improve at longer horizons. Therefore, we find some evidence towards promoting forecasts from Gen-AI models as benchmarks in future forecasting practice. However, at present, users are cautioned against reliability issues and Gen-AI being a black box in some cases. Introduction OpenAI succeeded in making artificial intelligence (AI) accessible to the world (n.b., it is only available to the population with access to the internet) and has demonstrated how generative AI (Gen-AI), a subset of deep learning, can transform our lives [1].As a result, since its launch in November 2022, the natural language model Chat Generative Pre-trained Transformer (ChatGPT) continues to disrupt industries across the globe [2], with other models like Microsoft Copilot and Google's Gemini emerging since. Given its popularity as the first to market, researchers have already delved into many aspects of ChatGPT, from its potential impact on education [3] and research [4] to its impact on various fields ranging from marketing [5] to forensics [6], to name a few.However, the rapid adoption of Gen-AI is also highlighting its many shortcomings, which range from hallucinations [7] to bias and ethical issues [8] and to the negative environmental impact [9,10].Furthermore, concerns about AI making certain job functions obsolete are also rapidly emerging [11].Therefore, the importance of promoting the use of AI for intelligence augmentation, i.e., enhancing human intelligence and improving the efficiency of human tasks as opposed to being a replacement, is crucial [12].In this regard, recent experimental evidence points towards an opportunity for using Gen-AI to reduce productivity inequalities [13]. 
A few months after ChatGPT was launched, Hassani and Silva [14] discussed the potential impact of Gen-AI on data science and related intelligence augmentation. Building on that work, here we focus our attention on "forecasting", which is a common data science task that helps with capacity planning, goal setting, and anomaly detection [15]. Today, Gen-AI tools offer the capability for non-experts to generate forecasts and use these in their decision-making processes. Nvidia's CEO Jensen Huang recently predicted the death of coding in a world where "the programming language is human, [and] everybody in the world is now a programmer" [16]. In a world where humans can now generate forecasts without an in-depth knowledge or understanding of forecasting theory, practice, or coding, we are motivated to determine whether there is a need to rethink forecasting practice concerning the benchmarks that are used to evaluate forecasting models. Benchmark forecasts are meant to have significant levels of accuracy and be simple to generate with minimal computational effort. This is an important aspect of forecasting practice, as investments in new forecasting models should only be entertained if there is sufficient evidence of a proposed model significantly outperforming popular benchmarks. As outlined in [17], when proposing a new forecasting model or undertaking forecast evaluations for univariate time series, it is important to consider the naïve, seasonal naïve, or ARIMA model as a benchmark for comparing forecast accuracy. The random walk (i.e., the naïve range of models) is known to be a tough benchmark to outperform [18]. Exponential smoothing, Holt-Winters and Theta forecasts are also identified as benchmark methods in one of the most comprehensive reviews of forecasting theory and practice [18]. Recent research confirms the superiority of AI models across various computational tasks by building on theories of deep learning, scalability, and efficiency [19,20]. As discussed, and evidenced below, these computational tasks now include forecasting using historical data. Given that large language models can generate forecasts based on AI prompts, this study is grounded in the following research question: RQ: Should forecasts from Gen-AI models (for example, forecasts from ChatGPT or Microsoft Copilot) be considered a new benchmark in forecasting practice? To the best of our knowledge, there exists no published academic work that seeks to propose or evaluate forecasts from Gen-AI models as a benchmark or contender in the field of forecasting. In contrast, machine learning models have been applied and compared with statistical models for time series forecasting [21], whilst deep learning models have also received much attention in the recent past [22]. Some studies propose hybrid forecasting models that combine machine learning, decomposition techniques, and statistical models and compare the performance against benchmarks like ARIMA [23]. Therefore, it is evident that studies seeking to introduce benchmarks via comparative analysis of models are important. For example, in relation to machine learning, Gu et al. [24] sought to introduce a new set of benchmarks for the predictive accuracy of machine learning methods via a comparative analysis, whilst Zhou et al.
[25] presented a comparison of deep learning models for equity premium forecasting.Gen-AI models, given their reliance on deep learning, can extract and transform features from data and identify hidden nonlinear relations without the need to rely on econometric assumptions and human expertise [25]. Therefore, our interest lies in conducting a comparative analysis of forecasts from Gen-AI models in comparison to forecasts generated by established, traditional benchmark forecasting models to determine whether there is sufficient evidence to promote a new benchmark model for forecasting practice in the age of Gen-AI.In this paper, initially, we consider ChatGPT as an example of a Gen-AI tool and use it to forecast three time series, as an example.These include the U.S. accidental death series [26][27][28], the air passengers series [29] and UK tourist arrivals [30,31].The forecasts from ChatGPT are compared with seven forecasting models which represent both parametric and non-parametric forecasting techniques and are provided via the forecast package in R [32].These include seasonal naïve (SNAIVE), Holt-Winters (HW), autoregressive integrated moving average (ARIMA), exponential smoothing (ETS), trigonometric seasonality, Box-Cox transformation, ARMA errors, trend and seasonal components (TBATS), seasonal-trend decomposition using LOESS (STL), and the Theta method.Models such as SNAIVE, ARIMA, ETS, Theta, and HW are identified as benchmark forecasting models in [17,18], whilst the rest have the shared properties of being automated, simple, and applicable with minimum computational effort without the need for an in-depth understanding of forecasting theory.However, unlike with Gen-AI models, the application of these selected benchmarks will require some basic coding skills and an understanding of the use of the programming language R. Through the empirical analysis, we find that in some cases, forecasts from Gen-AI models can significantly outperform forecasts from popular benchmarks.Therefore, we find evidence for promoting the use of Gen-AI models as benchmarks in future forecast evaluations.However, our findings also indicate that the accuracy of these forecasts could vary depending on the underlying data structures, the level of forecasting knowledge, and education, which will invariably influence the quality of prompt engineering and the training data underlying the Gen-AI model (e.g., paid vs. free versions).Reliability-related issues are also prevalent, alongside Gen-AI models being black boxes and thus restricting interpretability of the models being used. Through our research, we make several contributions to forecasting practice and the literature.First, we present the most comprehensive evaluation of forecasts from Gen-AI models to date, comparing them to seven traditional benchmark methods.Second, based on our findings, we can propose the use of Gen-AI models as benchmark forecasting models for forecast evaluations.In doing so, we add to the list of historical benchmark forecasting models in [18], which tend to require basic programming and coding skills.Third, our research also seeks to educate and improve the basic forecasting capabilities of the public by sharing the coding used to generate competing forecasts via the forecast package in R. 
Finally, through the discussion, we also seek to improve the public understanding of, and capability of engaging with, Gen-AI models for forecasting; we share the prompts used on Microsoft Copilot that resulted in a forecast for one of the datasets. The remainder of this paper is organized such that Section 2 briefly introduces the forecasting models with the code used to generate the forecasts, Section 3 presents the forecasting results and analysis, a discussion follows in Section 4, and the paper concludes in Section 5. Methods and Data In this section, we present the benchmark forecasting models, the data, and the metrics used to evaluate forecasts. The forecasts from Gen-AI models were generated by prompting ChatGPT (GPT-4), unless mentioned otherwise. It is noteworthy that we do not attempt to define the models that are generated by ChatGPT, as these were not known in advance of the application. Benchmark Forecasting Models All forecasts were generated using the forecast package in R (v.4.3.1) [32]. To minimize replication of information already in the public domain, we present concise summaries of each model and the code used to generate the benchmark forecasts to enable replication. Holt-Winters (HW) Forecasts from the Holt-Winters model [33,34] were generated via the forecast package in R, following the same pattern as the code shown for the models below. Autoregressive Integrated Moving Average (ARIMA) The ARIMA model is one of the most established and widely used time series forecasting models [35]. The modeling process seeks to separate the signal and noise by adopting past observations and taking into consideration the degree of differencing, autoregressive, and moving average components. The "auto.arima" model from the forecast package in R begins by repeating KPSS tests to determine the number of differences d. The data are then differenced d times, and the values of p (number of autoregressive terms) and q (number of lagged forecast errors in the forecasting equation) are chosen by minimizing the Akaike information criterion (AIC). The algorithm is efficient because, instead of considering every possible combination of p and q, it traverses the model space via a stepwise search. Thereafter, the "current model" is determined by searching the four ARIMA models, namely ARIMA(2,d,2), ARIMA(0,d,0), ARIMA(1,d,0), and ARIMA(0,d,1), for the one which minimizes the AIC. If d = 0, then the constant c is included; if d ≥ 1, then the constant c is set to zero. The model also evaluates variations on the current model by varying p and q by ±1 and including/excluding c. The steps following the minimization of the AIC are repeated until no lower AIC can be found. Those interested in the theory underlying this model are referred to [35].
library(forecast)
data<-scan()
time_series<-ts(data,start=c(1973,1), frequency=12)
model<-auto.arima(time_series)
forecast(model,h=12)
Exponential Smoothing (ETS) The theory underlying the ETS model is explained in [35]. In brief, the ETS model evaluates over 30 possible options and considers the error, trend, and seasonal components in choosing the best exponential smoothing model, optimizing initial values and parameters using maximum likelihood estimation and selecting the best model based on the Akaike information criterion [35].
library(forecast)
data<-scan()
time_series<-ts(data,start=c(1973,1), frequency=12)
model<-ets(time_series)
forecast(model,h=12)
The TBATS model is aimed at providing accurate forecasts for time series with complex seasonality. A detailed description of the TBATS model can be found in [36]. We consider this as a benchmark
given its simple application, like the other models considered here, which makes it easily accessible at minimal cost.
library(forecast)
data<-scan()
time_series<-ts(data,start=c(1973,1), frequency=12)
model<-tbats(time_series)
forecast(model,h=12)
Seasonal Naïve (SNAIVE) SNAIVE is a popular benchmark model for forecasting seasonal time series data. This model returns forecasts from an ARIMA(0,0,0)(0,1,0)m model, where m is the seasonal period (m = 12 in the case of our data). In the most basic terms, this model takes the historical value from the previous season as the best forecast for the current season.
library(forecast)
data<-scan()
time_series<-ts(data,start=c(1973,1), frequency=12)
model<-snaive(time_series,h=12)
Seasonal-Trend Decomposition Using LOESS (STL) The theory underlying the STL model can be found in [35], where the authors describe it as a robust decomposition method which uses loess to estimate non-linear relationships. The algorithm works by decomposing the time series using STL, forecasting the seasonally adjusted series, and returning the reseasonalized forecasts. Once more, we use this as a benchmark given its straightforward application to any time series.
library(forecast)
data<-scan()
time_series<-ts(data,start=c(1973,1), frequency=12)
model<-stlf(time_series,h=12)
Theta Forecast The final benchmark model considered in this paper is the Theta method [37], which can be described as simple exponential smoothing with drift. The series is seasonally adjusted (in the case of seasonal time series) using a classical multiplicative decomposition before the Theta method is applied. The resulting forecasts are then reseasonalized. Those interested in an alternative explanation of the theory underlying this method are referred to [38]. Following the approach taken in [39], the forecast evaluation relies not only on the root mean squared error (RMSE) and mean absolute percentage error (MAPE) as loss functions but also on the mean absolute error (MAE), as in [23]. MAPE values of less than 10% are indicative of highly accurate forecasting, 10-20% of good forecasting, and 20-50% of reasonable forecasting, whilst 50% or more indicates inaccurate forecasting [40]. Data Below we present and summarize the two main datasets used in this paper and refer readers to recent publications that summarize the dataset introduced in the discussion section. Death Series The main analysis considers the popular monthly U.S. accidental death series from January 1973 to December 1977 [26][27][28]. Figure 1 shows the death series. As is visible, it is affected by seasonal variations and a trend that slowly decreases at first and then appears to gradually increase over time. Table 1 presents some summary statistics which describe the data. The Shapiro-Wilk test for normality is not statistically significant, thereby confirming that the series is normally distributed. During the observed period, deaths averaged 8788 per month in the U.S. The Bai and Perron [41] test for breakpoints indicates that the series was affected by a structural break in October 1973. The coefficient of variation (CV) indicates that the variation in the death series can be quantified as 11%. Figure 2 presents a seasonal plot for the death series. As visible, this shows that deaths peaked annually in July.
Air Passengers Series The air passengers series [29] records monthly total U.S. air passengers from 1949 to 1960 (Figure 3) and has a different structure to the death series. Here, there is an upward-sloping trend and seasonality which increases over time. The seasonal plot (Figure 4) indicates that most air passengers were recorded annually in July. The descriptive statistics in Table 2 further evidence the differences between the two datasets being used. The Shapiro-Wilk test for normality is statistically significant and thereby indicates that the time series is not normally distributed. This indicates that the median air passengers value of 266 is a more appropriate measure of central tendency for these data. The Bai and Perron [41] breakpoints test concludes that five breakpoints are impacting this time series, and the CV of 43% confirms that this series reports more variation than the death series (CV = 11%). Application: Death Series At the initial stage, our intention is not to determine whether one forecasting model is significantly better than a competing forecasting model. Instead, our quest is modest in that we are seeking to identify whether there is a case for promoting the use of forecasts from Gen-AI models (i.e., ChatGPT in this example) as a benchmark forecasting model in the future. To evaluate this proposition, we set up a forecasting exercise whereby observations from January 1973 up until December 1977 were set aside for training our models. We then generated a h = 12-months-ahead forecast over the test period from January 1978 to December 1978. The findings are presented below.
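For readers who wish to reproduce this split with the forecast package, a minimal sketch is shown below. It uses the USAccDeaths series that ships with R, which covers January 1973 to December 1978, as a stand-in for the data described above, and a seasonal naïve benchmark; both choices are our assumptions rather than the exact script used in the study.

library(forecast)
# Train on Jan 1973 - Dec 1977 and hold out Jan - Dec 1978 as the test period.
deaths <- USAccDeaths
train  <- window(deaths, end   = c(1977, 12))
test   <- window(deaths, start = c(1978, 1))
# A 12-months-ahead forecast from one of the benchmark models in Section 2.
fc <- snaive(train, h = 12)
# Out-of-sample errors; RMSE, MAE and MAPE appear in the "Test set" row.
accuracy(fc, test)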
First and foremost, we were able to prompt ChatGPT into producing forecasts for this dataset from seven different models that included ARIMA, seasonal ARIMA (SARIMA), a non-parametric model, and a long short-term memory (LSTM) model. It is important to note that these models are not defined in Section 2 because we could not foresee which models Gen-AI would rely on. Furthermore, the key point of this research is to determine whether forecasts from Gen-AI, regardless of which forecasting model it might be using, should be considered as a benchmark model in future forecasting studies. Figure 5 shows the forecasts from these various ChatGPT-based models plotted against the actual data so that readers can visualize and compare the loss functions reported in Table 3. One clear message from Figure 5 is that the LSTM model generated by ChatGPT performs very poorly at forecasting the death series. Table 3 presents the forecast errors for each of these models. Based on the RMSE, ChatGPT: 2 forecasts are the most accurate, whilst based on the MAPE and MAE criteria, ChatGPT: 1 forecasts would be considered most accurate. Therefore, if one were to rely on ChatGPT for forecasting the death series, then either of these models could be considered appropriate. A further evaluation of MAPE based on the guidance in [40] uncovers that six out of the seven ChatGPT forecasts can be regarded as highly accurate as they report MAPE values of less than 10%. This is significant as all these forecasts were generated by simply amending the prompts on ChatGPT. This gives an early indication of the disruptive potential of ChatGPT as a forecasting model for the future. Through further prompting, we were able to uncover that ChatGPT: 1 was an ARIMA(5,1,0) model, whilst ChatGPT: 2 was an ARIMA(5,1,2)(1,1,1) model.
Next, we calculated forecasts for the death series using the benchmark models identified in Section 2. The out-of-sample forecasting errors are reported in Table 4. As is visible, forecasts from ChatGPT (i.e., ChatGPT: 1 and ChatGPT: 2) for the death series were only able to outperform forecasts from HW, whilst the rest of the benchmark models outperformed the best two forecasts from the Gen-AI model by large margins. However, the fact that two of the forecasts from ChatGPT were able to report a lower RMSE and MAPE than forecasts from HW indicates that further exploration of ChatGPT as a benchmark forecasting model is worthwhile, especially because HW is regarded as a current benchmark model in [18]. Figure 6 plots the forecasts from ChatGPT: 2 against the forecasts from HW as an example. Interestingly, this indicates that as the series peaks, the ChatGPT forecast overestimates the number of deaths but aligns better as the trough sets in. In contrast, the HW forecasts provide relatively better predictions as the series peaks but worsen significantly once the trough sets in. The visualization indicates that the quality of the forecast from ChatGPT for this time series improves as the forecasting horizon increases. This is interesting, as forecasting accuracy generally worsens as the horizon increases. Whilst, at this stage, the findings remain inconclusive in relation to the RQ, the results in Table 3 confirm that ChatGPT does have the capability of producing highly accurate forecasts as per the MAPE evaluation criteria set by Chen et al. in [40]. Furthermore, ChatGPT outperforming HW forecasts based on all loss functions was also a positive sign.
Application: Air Passengers Series To evaluate our proposition of ChatGPT as a benchmark model further, we next consider a forecasting exercise using the air passengers series. In this case, we evaluate forecasting the data at longer horizons of both h = 12 and h = 24 months ahead. Forecasts from ChatGPT are compared against the same benchmark models. Table 5 reports the out-of-sample forecasting results. (Note to Table 5: h refers to the forecasting horizon; for example, h = 12 indicates that forecasts were generated over the last 12 observations of the series. Shown in bold font is the model reporting the lowest forecast error based on a given loss function.) At h = 12 steps ahead, based on all three loss functions, forecasts from HW report the lowest errors. The ChatGPT forecast also appears to be performing well, as it reports errors that are closer to the errors from the HW forecast in comparison to the errors generated by the other models. Interestingly, the best ChatGPT forecast was attained via a HW model. Given that the HW forecast in R required some basic programming and coding knowledge, whilst the ChatGPT-led HW forecast was attainable simply by prompting ChatGPT, this sends an important message for the future of forecasting practice in terms of the potential of Gen-AI models in improving the accessibility of forecasting. This is further augmented when we compare the ChatGPT forecasting results with the rest of the benchmarks. The findings show that ChatGPT forecasts were able to outperform forecasts from ARIMA, ETS, TBATS, SNAIVE, STLF, and THETA based on all three criteria at this horizon. In terms of forecasting at h = 24 steps ahead, ChatGPT forecasts report slightly lower errors than HW forecasts. Note that the ChatGPT forecast reported here is also from a HW model. In this case, based on all three loss functions, we find that the ChatGPT forecast outperforms forecasts from the HW, ARIMA, ETS, TBATS, SNAIVE, STLF, and THETA models. Interestingly, only the forecasts from the ChatGPT and HW models reported MAPE values of less than 10% and can therefore be labeled as highly accurate forecasts [40] for this series.
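The corresponding exercise can be reproduced along the following lines with the forecast package. The sketch uses the AirPassengers series bundled with R and a seasonal naïve benchmark at both horizons; these are illustrative choices on our part, not the exact script behind Table 5.

library(forecast)
passengers <- AirPassengers                    # monthly totals, 1949-1960
eval_at_h <- function(y, h, fit_fun) {
  n     <- length(y)
  train <- window(y, end   = time(y)[n - h])   # hold out the last h observations
  test  <- window(y, start = time(y)[n - h + 1])
  fc    <- fit_fun(train, h)
  accuracy(fc, test)["Test set", c("RMSE", "MAE", "MAPE")]
}
# Seasonal naive benchmark at h = 12 and h = 24 months ahead.
eval_at_h(passengers, 12, function(y, h) snaive(y, h = h))
eval_at_h(passengers, 24, function(y, h) snaive(y, h = h))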
As SNAIVE and ARIMA models are two of the most popular benchmark forecasting models [17,18], in Figure 7 we compare the ChatGPT forecasts with these benchmarks at h = 12 steps ahead, whilst in Figure 8 we do the same at h = 24 steps ahead. From Figure 7, it is visible that the ChatGPT forecast performs much better than the forecast generated via the SNAIVE model. The ARIMA forecast and ChatGPT forecast appear closely aligned until the peak in the series, but as the trough sets in, the accuracy of the ChatGPT forecast improves. Once again, this seems to indicate that the forecasts from ChatGPT appear to perform better as the forecasting horizon increases, as was visible in Figure 6 as well. Figure 8 is clearer in evidencing that, in comparison to the ARIMA and SNAIVE forecasts, at h = 24 steps ahead the ChatGPT forecast provides a far more accurate prediction. Finally, we go a step further and test the out-of-sample forecasting errors for statistically significant differences using the Hassani-Silva [42,43] test. These results and the ratio of the RMSE (RRMSE) are reported in Table 6 below. The RRMSE is computed as RRMSE = RMSE(ChatGPT) / RMSE(Benchmark). If the RRMSE is < 1, then this indicates that ChatGPT forecasts are outperforming the competing forecast by (1 − RMSE(ChatGPT)/RMSE(Benchmark)) × 100%. At h = 12 steps ahead, we do not find any evidence to conclude that the ChatGPT forecast is significantly better than forecasts from HW, ARIMA, ETS, or TBATS for the air passengers series. However, in comparison to the SNAIVE, STLF, and THETA forecasts, the ChatGPT forecasts are significantly better by 77%, 50%, and 45%, respectively. At h = 24 steps ahead, we find more conclusive evidence for promoting ChatGPT forecasts as a benchmark model, because the evidence indicates that ChatGPT forecasts significantly outperform forecasts from ARIMA, ETS, TBATS, SNAIVE, STLF, and THETA by 52%, 51%, 49%, 54%, 44%, and 50%, respectively.
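A minimal sketch of these loss functions and of the RRMSE comparison is given below, assuming plain numeric vectors of actual values and of the two competing forecasts; the variable and function names are illustrative.

rmse <- function(actual, fc) sqrt(mean((actual - fc)^2))
mae  <- function(actual, fc) mean(abs(actual - fc))
mape <- function(actual, fc) 100 * mean(abs((actual - fc) / actual))
# RRMSE < 1 means the Gen-AI forecast outperforms the benchmark,
# by (1 - RRMSE) * 100 percent in RMSE terms.
rrmse <- function(actual, fc_genai, fc_benchmark) rmse(actual, fc_genai) / rmse(actual, fc_benchmark)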
Accordingly, based on the analysis presented and discussed above, we can respond to our original RQ and conclude that there is sufficient evidence to promote the use of forecasts from Gen-AI models as a new benchmark in forecasting practice. Our work also contributes to the literature on forecasting theory and practice by introducing Gen-AI models as a new and viable benchmark forecast model to complement the models identified in [18]. Our findings do evidence that, in some cases, forecasts from Gen-AI models can significantly outperform the other benchmarks frequently cited in the forecasting literature. In the case of the air passengers series, the forecasts from Gen-AI significantly outperformed the Theta forecast too; the Theta forecast is described in [18] as a "critical benchmark". The Impact of Prompt Engineering on Forecasts from Gen-AI Models Given the findings reported above, it is important to discuss the impact of prompt engineering on forecasts from Gen-AI models. Meskó [44] defines prompt engineering as "the practice of designing, refining, and implementing prompts or instructions that guide the output of LLMs to help in various tasks." Given that Gen-AI models now have memory [45], and several researchers are studying the influence of prompting on the quality of results from Gen-AI models [46,47], we find it pertinent to use another example to demonstrate how prompting results in forecasts. We are also mindful that the results presented in Section 3 rely on the paid, premium version of ChatGPT. Therefore, in this example, we rely on the preview, free version of Microsoft Copilot. The forecasting exercise compares forecasts from the "auto.arima" algorithm found in the forecast package in R with forecasts from Microsoft Copilot. We consider a 12-steps-ahead forecast (February 2017-January 2018) for monthly UK tourist arrivals. These data span from January 2000 to January 2018 and were previously used in [30,31,39], where readers can find extensive descriptions of the data. The prompt used to generate the forecast can be found in Appendix A. Table 7 below reports the out-of-sample forecasting results for UK tourist arrivals. As visible, the forecast generated by the free version of Microsoft Copilot was able to outperform the ARIMA forecast by 6% and report the lowest errors across all error metrics. However, we did not find evidence of any statistically significant difference between the forecast errors at this horizon. Nevertheless, these findings further support our proposition towards a new benchmark in forecasting practice, as forecasts from the free version of a Gen-AI model were able to outperform forecasts from "auto.arima". Gen-AI Can Forecast: So What? As our findings support the promotion of forecasts from Gen-AI models as a benchmark model in forecasting practice, we find it pertinent to comment upon the practical importance and significance of this work. First and foremost, the ability to use Gen-AI models to generate forecasts with some credible accuracy has significant implications for the population at large. For the first time in the history of mankind, humans are now able to generate a forecast for a variable of interest without the need for any formal knowledge or education in the theory underlying time series analysis and forecasting. This would further advance the adoption of forecasting across different functions, and the use of Gen-AI models for forecasting purposes would also increase over time.
Gen-AI Can Forecast: So What? As our findings support the promotion of forecasts from Gen-AI models as a benchmark model in forecasting practice, we find it pertinent to comment upon the practical importance and significance of this work. First and foremost, the ability to use Gen-AI models to generate forecasts with some credible accuracy has significant implications for the population at large. For the first time in the history of mankind, humans are now able to generate a forecast for a variable of interest without the need for any formal knowledge or education in the theory underlying time series analysis and forecasting. This would further advance the adoption of forecasting across different functions, and the use of Gen-AI models for forecasting purposes would also increase over time. Second, our findings shed light on a future where humans will not necessarily need to know a programming language (e.g., R or Python) in depth to engage in forecasting practice, as one would be able to prompt Gen-AI models using human language to obtain the desired results. Third, we believe a surge in the use and application of Gen-AI models for forecasting would result in a renewed demand for formal time series analysis and forecasting education that can be associated with Gen-AI and related skills. This goes back to the importance of prompt engineering and of knowing the right questions with which to prompt the Gen-AI to obtain the most accurate results. Therefore, we believe that if a trend were to emerge whereby humans began exploring the use of Gen-AI for forecasting, this would be positive for the growth and development of the entire field of forecasting. Fourth, our initial findings point towards the importance of forecasting practitioners considering forecasts from Gen-AI models as a benchmark when tasked with a forecast evaluation or at the point of introducing a new forecasting approach. This aspect is further strengthened by the results in Section 3, which show that ChatGPT forecasts were significantly more accurate in some cases in comparison to some of the popular benchmarks identified in [18]. Conclusions This paper considers the potential impact of Gen-AI on a common data science task known as forecasting. We sought to answer the RQ, which focuses on whether there is any support for the adoption of forecasts from Gen-AI models as benchmarks in future forecast evaluations. The ease of generating forecasts via prompts (as opposed to the need to understand the theory underlying forecasting or to have any prior knowledge of coding and programming), when coupled with the forecast evaluations presented herewith, provides some evidence which justifies the use of forecasts from Gen-AI models as benchmarks. It is noteworthy that in some cases, we find forecasts from ChatGPT resulting in significantly more accurate outcomes than popular and powerful benchmark models from the forecast package in R. These initial findings do indicate that before the adoption of new models (that may be costly) or complex models (that may be time-consuming), it is pertinent for stakeholders to compare their performance against forecasts attainable via Gen-AI models to determine whether there exists a statistically significant difference between the forecast errors. In terms of using Gen-AI for forecasting variables, the initial findings reported here point towards several interesting insights. First, as with all forecasting models, the application of Gen-AI models to three different datasets showed that the underlying data structures and processes could impact the accuracy of the forecasts attainable via Gen-AI models. Second, the accuracy of forecasts from Gen-AI models appears to depend largely on prompt engineering. This skill, when coupled with expert knowledge of forecasting, can result in more accurate forecasts. Third, the premium versions of Gen-AI models are likely to generate more accurate forecasts than the free versions given the vast differences in their training samples. However, it is noteworthy that the free versions too could potentially generate competitive forecasts (see Section 4.1).
There are also some limitations and drawbacks in the use of forecasts from Gen-AI models that should be considered. First, in an educational or professional setting, the ability to afford a license to access the premium version of Gen-AI models can influence the accuracy of forecasts and thus could widen inequalities and give an undue competitive advantage to those who are financially better off. Second, Gen-AI models can be black boxes. For example, the evidence reported in Appendix A shows that the free version of Microsoft Copilot was not able to interpret the SARIMA model that was applied in Section 4.1. Third, certain versions of Gen-AI models can lack reliability, as we experienced with the free version of Microsoft Copilot. For example, as evidenced in Appendix A, on 24th March 2024 we were able to generate forecasts for the results reported in Section 4.1 by uploading the data as a comma-separated values file onto the Gen-AI platform. However, since May 2024, the free version of Microsoft Copilot could not replicate these forecasts using the same prompts and instead gave the error: "I'm sorry for any confusion, but as an AI, I'm currently unable to directly accept files such as Excel spreadsheets." Finally, the purpose of this research was to position forecasts from Gen-AI models (like ChatGPT) as a viable benchmark model in future forecasting evaluations and practice. In doing so, we also open several directions for future research. First, there is an opportunity to develop a greater understanding of the most efficient prompting mechanism on Gen-AI models with which to obtain the most accurate forecast for a given dataset. Second, researchers should consider a more comprehensive analysis of ChatGPT as a forecasting model by applying it to a variety of datasets with different structures. Third, researchers should consider comparing forecasts from different Gen-AI models (e.g., ChatGPT vs. Microsoft Copilot vs. Gemini) to determine whether one model's forecasting capabilities are superior to the other. Finally, a more extensive forecast evaluation that compares forecasts from the forecast package in R against forecasts from Gen-AI models when faced with several datasets could yield some interesting findings with which to guide future forecasting studies.
Table 1. Summary statistics for the death series. Note: SD, standard deviation; IQR, interquartile range; CV, coefficient of variation. Normality reports the p-value from a Shapiro-Wilk test for normality. The breakpoints are calculated using the Bai and Perron [41] test.
Table 3. Out-of-sample forecasting results for the death series. Note: All forecasts listed above were generated via ChatGPT. Shown in bold font is the model reporting the lowest forecast error based on a given loss function.
Table 4. Out-of-sample forecasting results from the benchmark models for the death series. Note: All forecasts listed above were generated via the forecast package in R. Shown in bold font is the model reporting the lowest forecast error based on a given loss function.
Table 5. Out-of-sample forecasting results for the air passengers series.
Table 6. Out-of-sample forecasting RRMSE results for the air passengers series.
Figure 8. h = 24-months-ahead forecast for U.S. air passengers.
Table 7. Out-of-sample forecasting results for the UK tourist arrivals series. Note: h refers to the forecasting horizon. For example, h = 12 indicates that forecasts were generated over the last 12 observations of the series. Shown in bold font is the model reporting the lowest forecast error based on a given loss function. The final column reports the RRMSE.
9,199.4
2024-05-21T00:00:00.000
[ "Computer Science" ]
Updates of the nuclear equation of state for core-collapse supernovae and neutron stars: effects of 3-body forces, QCD, and magnetic fields We summarize several new developments in the nuclear equation of state for supernova simulations and neutron stars. We discuss an updated and improved Notre-Dame-Livermore Equation of State (NDL EoS) for use in supernova simulations. This EoS contains many updates. Among them are the effects of 3-body nuclear forces at high densities and the possible transition to a QCD chiral and/or color superconducting phase at high densities. We also consider the neutron star equation of state and neutrino transport in the presence of strong magnetic fields. We study a new quantum hadrodynamic (QHD) equation of state for neutron stars (with and without hyperons) in the presence of strong magnetic fields. The parameters are constrained by deduced masses and radii. The calculated adiabatic index for these magnetized neutron stars exhibits rapid changes with density. This may provide a mechanism for star-quakes and flares in magnetars. We also investigate the strong magnetic field effects on the moments of inertia and spin down of neutron stars. The change of the moment of inertia associated with emitted magnetic flares is shown to match well with observed glitches in some magnetars. We also discuss a perturbative calculation of neutrino scattering and absorption in hot and dense hyperonic neutron-star matter in the presence of a strong magnetic field. The absorption cross-sections show a remarkable angular dependence in that the neutrino absorption strength is reduced in a direction parallel to the magnetic field and enhanced in the opposite direction. The pulsar kick velocities associated with this asymmetry are comparable to observed pulsar velocities and may affect the early spin down rate of proto-neutron star magnetars with a toroidal field configuration. Introduction To describe the structure and hydrodynamics of compact matter, an equation of state (EoS) is needed to relate the physics of the state variables [1]. In supernovae the EoS determines the dynamics of the collapse and the outgoing shock, and determines whether the remnant ends up as a neutron star or a black hole. In a neutron star, it determines the mass-radius relationship, stellar composition, cool-down time and dynamics of neutron star spin down and mergers. In this paper we summarize some progress in the development of equations of state for supernova and neutron star simulations. In particular we highlight the role of 3-body forces, QCD, magnetic fields and neutrino transport. Three body forces and the EoS At present, only a few hadronic EoSs are commonly employed that cover large enough ranges in density, temperature and electron fraction to be of use in core-collapse supernova simulations. The two most employed in astrophysical simulations are the EoS of Lattimer & Swesty (LS91) [2] and that of H. Shen et al. (Shen98) [3,4]. The former utilizes a non-relativistic parameterization of nuclear interactions in which nuclei are treated as a compressible liquid drop including surface effects. The latter is based upon a Relativistic Mean Field (RMF) model using the TM1 parameter set in which nuclei are calculated in a Thomas-Fermi approximation. Baryonic matter has also been parameterized with a new RMF model that treats nuclei and non-uniform matter with the statistical model of Hempel et al. [5]. Here, we discuss a new Notre Dame-Livermore (NDL) EoS [6].
This EoS evolves from the original Livermore formulation [7,8], but unlike the previous version, this NDL EoS is consistent with known experimental nuclear matter constraints and recent mass and radius measurements of neutron stars [14]. Below nuclear matter density, the conditions for nuclear statistical equilibrium (NSE) are achieved at a temperature of T ≈ 0.5 MeV. Below this temperature the nuclear matter is approximated by a nine-element reaction network which must be evolved dynamically. Above this temperature, the nuclear constituents are represented by free nucleons, alphas and a single "representative" heavy nucleus. Among the new features in the NDL EoS is that the high-density phase of the EoS is treated with a parameterized Skyrme energy density functional that utilizes a modified zero-range 3-body interaction. The effects of pions on the state variables at high densities are also included, as well as the consequences of a phase transition to a QGP. Above nuclear matter saturation density we include both 2-body (v^(2)_ij) and 3-body (v_ijk) interactions in the many-nucleon system. The Skyrme two-body potential v^(2)_12 is given in the standard form [9]. Here [6] we consider the possibility that the Skyrme potential can be dominated by a 3-body repulsive interaction at high density. This term is taken to be a zero-range force of the form v_123 = t_3 δ(r_1 − r_2) δ(r_2 − r_3). If the assumption is made that the neutron-star medium is spin-saturated [10], the three-body term becomes a density-dependent two-body interaction [9] that we generalize to a modified Skyrme interaction that replaces the linear dependence on the density with a power-law index σ. A value of σ = 1/3 is a common choice [12,13]. However, in the present approach we treat σ as a free parameter constrained by the skewness coefficient and observed neutron-star properties [14]. All quantities and coefficients for symmetric nuclear matter are obtained from the usual relations. The pressure is P = n² ∂(E/A)/∂n. The volume compressibility of symmetric nuclear matter is calculated from the derivative of the pressure: K = 9 ∂P/∂n = 18 P/n + 9n² ∂²(E/A)/∂n². The skewness coefficient is obtained from the third derivative of the energy per nucleon, Q_0 = 27 n³ ∂³(E/A)/∂n³ evaluated at saturation. Applying the saturation condition P(n = n_0) = n² ∂(E/A)/∂n |_{n=n_0} = 0, one obtains [6] a system of four equations in terms of t_0, (3t_1 + 5t_2), t_3, and σ. Solving this system for σ yields an expression in terms of the saturation quantities [6], where the subscript zero denotes values at the saturation density. For our purposes, we adopt inferred values of n_0, E_0, K_0, Q_0 from the literature and use these to determine the Skyrme model parameters. We also demand that these parameters allow neutron star masses ≥ 1.97 ± 0.04 M⊙ [14]. The saturation density n_0 ≈ 0.16 fm⁻³ and the binding energy per nucleon E_0 = −16 MeV are reasonably well established [15]. The determination of the compressibility parameter from experimental data on the giant monopole resonance on finite nuclei has been a long-standing conundrum. For our purposes we adopt the median value and uncertainty from Ref. [16], i.e. K_0 = 240 ± 10 MeV, as this is appropriate for the Skyrme force approach employed here. Solving the saturation conditions self-consistently, we therefore determine the best range for the nuclear compressibility consistent with the results of [16]. There is even more uncertainty in the skewness parameter Q_0. Breathing mode data [17] imply Q_0 = −700 ± 500 MeV. Using the range for K_0 given by [16] and solving the saturation conditions, we find [6] a skewness coefficient of Q_0 = −390 ± 90 MeV, consistent with the range quoted above. The fiducial NDL EoS is then constructed [6] using these Skyrme parameters. The density dependence of the symmetry energy beyond saturation is highly uncertain. For many Skyrme models the symmetry energy either saturates at high densities, or in the worst case becomes negative. This results in a negative pressure deep inside the neutron star core. We implemented [6] a linearly increasing function of density. The symmetry energy at saturation was determined by the difference between the energy per particle for pure neutron matter and that of symmetric matter at T = 0 MeV. For all relevant parameter sets the NDL EoS symmetry energy at saturation is S_0 = 30.5 MeV [6].
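As an illustrative aside, the saturation-property relations quoted above can be evaluated numerically for any assumed energy-per-nucleon curve. The sketch below uses a toy Skyrme-like form with made-up coefficients (it is not the NDL parameter set) and finite differences to return P, K, and Q at a chosen density.

```python
import numpy as np

def eos_quantities(energy_per_nucleon, n0, h=1e-3):
    """Return P (MeV fm^-3), K (MeV), and Q (MeV) at density n0 (fm^-3)
    for a callable E/A(n) given in MeV, using central finite differences."""
    f = energy_per_nucleon
    d1 = (f(n0 + h) - f(n0 - h)) / (2 * h)
    d2 = (f(n0 + h) - 2 * f(n0) + f(n0 - h)) / h**2
    d3 = (f(n0 + 2 * h) - 2 * f(n0 + h) + 2 * f(n0 - h) - f(n0 - 2 * h)) / (2 * h**3)
    P = n0**2 * d1                      # pressure
    K = 18 * P / n0 + 9 * n0**2 * d2    # volume compressibility
    Q = 27 * n0**3 * d3                 # skewness coefficient
    return P, K, Q

# Toy energy per nucleon: Fermi-gas-like kinetic term, attractive 2-body term,
# and a repulsive power-law 3-body term (coefficients are placeholders)
def toy_energy(n, a=-285.0, b=235.0, sigma=1.0 / 3.0):
    kinetic = 22.1 * (n / 0.16) ** (2.0 / 3.0)   # rough kinetic-energy scale in MeV
    return kinetic + a * n + b * n ** (sigma + 1.0)

print(eos_quantities(toy_energy, n0=0.16))
```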
QCD and the EoS For sufficiently high densities and/or temperature a transition from hadronic matter to a quark-gluon plasma (QGP) can occur [18]. Progress [19] in lattice gauge theory (LGT) has shown that at high temperature and low density a deconfinement and chiral symmetry restoration occur simultaneously. In particular, it has been found [19] that the order parameters for deconfinement and chiral symmetry restoration change over a temperature range of T = 145−170 MeV [20,21] as a smooth crossover. At low density the hadron phase can be approximated as a pion-nucleon gas, while the QGP phase can be approximated in a bag model as a non-interacting relativistic gas of quarks and gluons [22]. The LGT results then imply a range for the QCD vacuum energy of 165 ≤ B^{1/4} ≤ 225 MeV. Also requiring that the maximum mass of a neutron star exceed 1.97 ± 0.04 M⊙ [14] implies a value for B^{1/4} near the top end of that range. For the description of quark matter we utilize a bag model with 2-loop corrections, and construct the EoS from a phase-space integral representation over scattering amplitudes. We allow for the possibility of a coexistence mixed phase in a 1st-order transition or a simple crossover transition. It is convenient to compute the QGP in terms of the grand potential, Ω(T, V, µ), which for the quark-gluon plasma takes the form Ω = Ω_q^(0) + Ω_q^(2) + Ω_g^(0) + Ω_g^(2) + BV, where Ω_q^(0) and Ω_g^(0) denote the 0th-order bag model thermodynamic potentials for quarks and gluons, respectively, and Ω_q^(2) and Ω_g^(2) denote the 2-loop corrections. In most calculations sufficient accuracy is obtained by using fixed current algebra masses (e.g. m_u ∼ m_d ∼ 0 GeV, m_s ∼ 0.1−0.3 GeV). For this work we chose the strange quark mass to be m_s = 150 MeV and a bag constant B^{1/4} = 165−220 MeV. The quark contribution to the thermodynamic potential is given [18] in terms of a sum of the ideal gas contribution plus a two-loop correction from phase-space integrals over Feynman amplitudes [23]. Fig. 1 compares [6] the neutron star mass-radius relation for the NDL EoS for: 1) a hadronic EoS with 3-body forces (solid line); 2) a first-order QCD transition with B^{1/4} = 220 MeV; and 3) a simple QCD crossover transition. Also shown for comparison are results from the LS180 EoS, Shen EoS and the original Bowers & Wilson EoS. Note that all three versions of the NDL EoS easily accommodate a maximum neutron star mass ≥ 1.97 ± 0.04 M⊙; however, the hadronic version must have the 3-body forces at high baryon density. A first-order phase transition to a QGP is consistent with the high maximum neutron star mass constraint [14] for a bag constant B^{1/4} > 220 MeV. This imposes a low baryon density transition temperature of T_c = 158 MeV [22], which is consistent with the current range of crossover temperatures determined from LGT [19].
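For readers who want to reproduce the type of mass-radius comparison shown in Fig. 1, the following sketch integrates the TOV structure equations for a simple polytropic toy EoS (not the NDL, LS180, or Shen EoS) in geometrized units; all parameter values are illustrative.

```python
import numpy as np

K_POLY, GAMMA = 100.0, 2.0   # toy polytrope P = K * eps**Gamma (geometrized units, G = c = 1)

def eps_of_p(p):
    return (p / K_POLY) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    """Right-hand side of the TOV equations; y = (pressure, enclosed mass)."""
    p, m = y
    if p <= 0.0:
        return np.array([0.0, 0.0])
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return np.array([dpdr, dmdr])

def integrate_star(p_central, dr=1e-3):
    """RK4 integration outward from the centre until the pressure vanishes."""
    r, y = dr, np.array([p_central, 0.0])
    while y[0] > 1e-10 * p_central:
        k1 = tov_rhs(r, y)
        k2 = tov_rhs(r + dr / 2, y + dr * k1 / 2)
        k3 = tov_rhs(r + dr / 2, y + dr * k2 / 2)
        k4 = tov_rhs(r + dr, y + dr * k3)
        y = y + dr * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        r += dr
    return r, y[1]   # radius and gravitational mass in geometrized units

radius, mass = integrate_star(p_central=1.5e-3)
print(f"R = {radius:.2f}, M = {mass:.3f} (geometrized units)")
```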
Magnetic Fields and the EoS We have also considered the neutron star equation of state and neutrino transport in the presence of strong magnetic fields [25]-[30]. Indeed, magnetic fields are everywhere in Nature and frequently play a role in astrophysical phenomena. In particular, the existence of magnetars and magnetar flares [31,32,33], along with the observed asymmetry in supernova explosions and the observed pulsar kick velocities, all suggest that strong magnetic fields play an important role in supernova explosions and the formation of proto-neutron stars [34,35,36]. In view of this we have undertaken studies of a variety of phenomena. In [25] we considered the role that strong interior magnetic fields (B ∼ 10^17 G) would have on neutron star structure and stability. We considered the nuclear equation of state for an ideal npe gas in a strong magnetic field. In particular, we calculated the proton concentration, the threshold densities for neutron, muon, and pion production and pion condensation in a strong magnetic field both without and with the effect of the nucleon anomalous magnetic moments. It was shown [25] that the higher Landau levels are significant at high density in spite of the existence of a very strong magnetic field. In particular, at high density, the proton concentration approaches the nonmagnetic limit. We also obtained the neutron appearance threshold density in a magnetic field when the nucleon anomalous magnetic moment is included. We also showed [25] that the muon and pion threshold densities are not affected by magnetic fields for B < 10^17 G. We also obtained an equation of state for a pion condensate in strong magnetic fields. We found [25] that magnetic fields reduce the amount of pion condensation. However, we could still find distinguishable effects from a pion condensate in strongly magnetized neutron stars. In addition, we demonstrated an oscillatory behavior of the adiabatic index in both strongly magnetized n, p, e and n, p, e, µ, π gases at high density. Here we speculated that this behavior might lead to an interior pulsational instability. In [26] we investigated the possibility that soft gamma-ray repeaters (SGRs) and anomalous X-ray pulsars (AXPs) might be observational evidence for a magnetic phase separation in magnetars. We studied such magnetic domain formation as a new mechanism for SGRs and AXPs in which magnetar matter separates into phases containing different flux densities. We identified the parameter space in matter density and magnetic field strength at which there is an instability for magnetic domain formation. We showed that such instabilities will likely occur in the deep outer crust for the magnetic Baym, Pethick, and Sutherland (BPS) model and in the inner crust and core for magnetars described in relativistic Hartree theory. Moreover, we estimated that the energy released by the onset of this instability is comparable with the energy emitted by SGRs. In [27] this has recently been extended to the study of a new quantum hadrodynamic (QHD) equation of state for neutron stars (with and without hyperons) in the presence of strong magnetic fields. The parameters were constrained by the condition that the deduced neutron star masses and radii must be consistent with the recent observations [14] of a high-mass neutron star.
The calculated adiabatic index for these magnetized neutron stars exhibited the same rapid changes with density. This was hypothesized to provide possible insight into the mechanism of starquakes and flares in magnetars. We also investigated the strong magnetic field effects on the moments of inertia of neutron stars. The change of the moments of inertia associated with emitted magnetic flares was shown to match well with observed glitches in some magnetars. In [29,30] we explored a perturbative calculation of neutrino scattering and absorption in hot and dense hyperonic neutron-star matter in the presence of a strong magnetic field. We found that the absorption cross-sections show a remarkable angular dependence in that the neutrino absorption strength is reduced in a direction parallel to the magnetic field and enhanced in the opposite direction. This asymmetry in the neutrino absorption can be as much as 2% of the entire neutrino momentum for a large interior magnetic field. We estimated the pulsar kick velocities associated with this asymmetry in a fully relativistic mean-field theory formulation and showed that the kick velocities are comparable to observed pulsar velocities. In [30] we have extended this calculation to include a toroidal magnetic field configuration. In this case, there can be an asymmetric emission of neutrino momentum along the magnetic field lines that are in the direction of the neutron star spin. This can substantially accelerate the spin down of a neutron star in the early cooling phase, ∼ 10 sec after core bounce. This is to be compared with [28] in which we considered the spin down of a neutron star purely from the outflow of neutrinos without a magnetic field.
3,700.2
2013-02-24T00:00:00.000
[ "Physics" ]
Dynamic Model of a Virtual Air Gap Reactor
David Sevsek, Marko Hinkkanen, Fellow, IEEE, Jarno Kukkola, and Matti Lehtonen
Abstract—Variable reactors have been a vital component of power networks for decades, where they have been used as fault-current limiting devices or for reactive power compensation. Traditionally, modifying the inductance of predominantly mechanically operated variable reactors requires seconds to minutes. In contrast, virtual air gap (VAG) reactors can change the inductance within milliseconds, potentially improving power system stability. Existing dynamic models of VAG reactors cannot capture the entire system dynamics, limiting their applicability for simulations in the time-domain. This research presents two dynamic VAG reactor models, one with and one without core losses. The models capture all significant system dynamics using electromagnetic principles and VAG reactor flux linkage behavior. The proposed models were experimentally validated using a small VAG reactor. Over a broad operating range, both models accurately reproduce the dynamic behavior, transient response, and dominant harmonics of the small VAG reactor. Consequently, the models may be used for a variety of applications, such as time-domain simulations, harmonic analysis, and the development of suitable controllers for VAG reactors. In addition, engineers may use the core loss omitting model as a VAG reactor design tool, as the actual reactor is not required for modeling.
I. INTRODUCTION Modern power distribution systems must be reliable, resilient against disturbances, and deliver high-quality power. Variable reactors have traditionally attenuated power quality issues and network disturbances such as voltage deviations or harmonics. Furthermore, they have been used to limit fault currents. Using technologies such as on-load tap-changers or variable air gap reactors, traditional variable reactors have the ability to change the inductance [1], [2]. Those systems have been valuable assets in the past. However, increasing power quality and safety requirements demand continuously adjustable reactors.
One method which has seen an increasing interest in recent years to attain continuous adjustability is the utilization of saturable reactors (SR) [3], [4].SRs were described for the first time by Burgess in 1903 [5].SRs utilize a second DC winding to create a DC-biased flux in a magnetic core, thereby changing the magnetization of the SR.Hence, the DC current can regulate the reactor inductance smoothly and fast.The application of modern power electronics enables a precise DC current control improving inductance-changing speeds and accuracies of SRs.This enhancement can boost the power system stability.However, the lack of power electronics and the excessive costs limited the application of saturable reactors mainly to low-power applications in the last century. Recently, there has been renewed interest in a novel class of SRs that rely on the virtual air gap (VAG) concept [6], [7], [8].This novel class can be referred to as VAG reactors which became interesting due to the introduction of low-cost and high-power electronics and the commonly high durability of SRs.VAG reactors utilize pairs of secondary windings integrated into the magnetic core of the reactor.However, the winding direction of the secondary winding pairs is opposed.As a result, the DC currents flowing through the secondary windings create opposing DC fluxes.Hence, the secondary flux path closes locally, leading to a local saturation of the magnetic core.This local saturation phenomenon changes the magnetic reluctance of the core, influencing the primary winding flux linkage.Therefore, it can be concluded that the local core saturation via a DC current through immersed secondary control windings alters the inductance of the reactor.A straightforward real-time DC current controller that modifies the primary inductance of VAG reactors was presented in [7]. There have been several attempts to study VAG reactors [7], [8], [9], [10], [11], [12], [13], [14].For instance, in [12], [13], a finite element analysis has been performed to determine the equivalent length of VAGs.However, only a few analytical modeling approaches enabling a dynamic performance investigation of VAG reactors have been presented in previous research [7], [9], [10], [14].In [14], a design tool for VAG reactors was presented using reluctance networks.Similar to this, in [9], essential design features of VAG reactors, such as the core material and the dimensions of the VAG windows, have been investigated with the help of numerical magnetic field computations.Furthermore, in [10], a series of laboratory tests have been performed on a VAG reactor, demonstrating the voltage control capabilities of VAG reactors. The only known study that developed a time-domain model concentrating on the dynamic behavior of VAG reactors was presented in [7].This study presents a dynamic state-space model of a VAG reactor, validated with measurements of a low-voltage (LV) prototype.The time-domain modeling accuracy of the given model in [7] is acceptable.However, the model has a significant drawback.The state-space model fails to model the third harmonic.This weak spot comes from a simplification, assuming that the self-inductance of the primary winding depends solely on the DC control current, neglecting the influence of the primary current.This shortcoming reduces its usability when investigating the harmonic content of a VAG reactor and its inductance-changing capabilities. 
This paper is an extension of work initially presented in [15]. The paper proposes two dynamic models for VAG reactors based on fundamental electromagnetic modeling principles [16], [17]. Both models reproduce the dynamic behavior of a VAG reactor with excellent accuracy, including the dominating harmonics. In its basic form, the dynamic model ignores the core losses. However, it is also shown that the core losses can be incorporated by augmenting the basic dynamic model with a simple core loss resistance as an enhancement compared to the nonlinear core loss resistance model in [15]. Finite element method (FEM) simulations of a VAG reactor produce current-flux-linkage mappings, which are the basis for the dynamic models. In addition, the accuracy of the proposed models is validated with a small VAG reactor that was not available in [15]. In addition to the work presented in [15], this paper presents the model characterization process and validates both models, including harmonic and transient analysis. The proposed models are intended for use in time-domain simulations. A. VAG Reactor To verify the dynamic models described in this paper, a small VAG reactor was constructed and tested. Based on the assumption that the magnetic core would behave like grain-oriented steel of type ET150-30, the reactor was designed as shown in Fig. 1(a). Fig. 1(b) depicts the primary and secondary windings and their corresponding connections. Fig. 1(c) depicts the constructed reactor; the visual difference between Fig. 1(a) and (c) is entirely due to the plastic cover shielding the core. B. Modeling Procedure A 3D FEM model was built and simulated in COMSOL based on the dimensions illustrated in Fig. 1. All windings consist of copper and have a cross-sectional area of 3.53 mm². First, the magnetization characteristics were measured as described in Section II-B1. Alternatively, the magnetization characteristics could be obtained from the manufacturer of the core material. Afterward, the measured BH characteristics were extrapolated to ensure good modeling accuracy for overfluxed regions in the core caused by the secondary control windings. The extrapolation method is elaborated in Section II-B2. 1) Magnetization Characteristics: The magnetic properties of the core material that was or will be used to create a VAG reactor may often be found from its datasheet. If the magnetization characteristics are reliable, the approach described in this section may be redundant. In many situations, however, the steel production process and coil fabrication can significantly alter the material properties. After initial testing of the VAG reactor described in this article, it became clear that the same phenomena also occurred with the small VAG reactor. The mismatch between the desired and measured inductance was considerable. In such instances, the magnetization properties of the actual reactor must be measured. The magnetization characteristics of the core can be determined by exciting the primary winding with a sinusoidal voltage, while the secondary windings are utilized to measure the induced voltage, similar to the procedure presented in [18], [19]. Therefore, the secondary windings have been reconnected so that the reactor works as a transformer. The magnetic flux density B in the core can then be estimated as B(t) = (1/(N_2 A_c)) ∫ u_2 dt, where N_2 is the number of secondary windings, A_c is the cross-sectional area of the core, and u_2 is the induced voltage.
The magnetic field strength H can be determined as H(t) = N_1 i_1(t)/l, where N_1 is the number of primary windings, i_1 is the measured current through the primary winding, and l is the mean path length of the core. In this case, the number of secondary windings is N_2 = 99. The effective cross-sectional area of the core is A_c = 0.01 m², and the mean path length of the core has been set to l = 1 m. The integration of the measured voltage typically leads to an integration error due to measurement inaccuracies. This integration error has been removed by subtracting the mean of the measured voltage from the measurements. The resulting hysteresis curve is depicted in Fig. 2, which also illustrates the selected initial magnetization curve. The selected magnetization curve is approximately the median curve dissecting the measured hysteresis curve. 2) Extrapolation of Magnetization Characteristics: Datasheets of electrical steels and BH curve measurement techniques like the one described before typically provide data for magnetic flux densities up to 1.9 T. However, depending on the design of the VAG reactor and the secondary control currents, overfluxed regions with flux densities above 1.9 T can occur. Hence, it may be necessary to extrapolate the BH characteristics up to the magnetic saturation in some cases. In this study, the extrapolation was done with the help of COMSOL's built-in BH curve checker application, which extrapolates input data (i.e., the measured BH curve) up to magnetic saturation utilizing the Simultaneous Exponential Extrapolation (SEE) method, presented in [20]. The extrapolated BH curve of the studied VAG reactor created using the BH curve checker application can be seen in Fig. 3. 3) FEM Simulation: Based on the extrapolated magnetization characteristics and the dimensions of the VAG reactor, time-domain simulations in COMSOL were executed. Fig. 4 illustrates a case where the local saturation phenomena caused by the secondary DC current can be seen. The current-flux-linkage mappings containing the necessary data for the dynamic models presented in this study can be created by performing FEM time-domain simulations at different operating points of the VAG reactor. Different operating points mean performing simulations with different secondary currents. However, when performing the simulations, it is essential to remember that the rated operating flux density, the maximum flux linkage, and the primary voltage level are directly related. Therefore, all simulations must be performed at the rated primary voltage. As a result, the current-flux-linkage mappings contain data at the rated operating flux density of the VAG reactor. For example, the flux linkage characteristics of the simulated VAG reactor in this study are shown in Fig. 5.
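The measurement-based B-H estimation described above can be sketched in a few lines of code. The signals and the primary turn count N1 below are hypothetical placeholders (only N2 = 99, Ac = 0.01 m², and l = 1 m are taken from the text); the offset of the induced voltage is removed before integration to suppress the drift mentioned above.

```python
import numpy as np

def bh_from_measurements(u2, i1, fs, N1=100, N2=99, Ac=0.01, l=1.0):
    """u2: sampled induced secondary voltage (V); i1: sampled primary current (A);
    fs: sampling rate (Hz). Returns the field strength H (A/m) and flux density B (T)."""
    u2 = np.asarray(u2, float) - np.mean(u2)   # remove offset so the integral does not drift
    B = np.cumsum(u2) / fs / (N2 * Ac)         # B(t) = (1 / (N2 * Ac)) * integral of u2 dt
    H = N1 * np.asarray(i1, float) / l         # H(t) = N1 * i1(t) / l
    return H, B

# Hypothetical 50 Hz excitation sampled at 10 kHz
t = np.arange(0.0, 0.1, 1e-4)
u2 = 40.0 * np.cos(2 * np.pi * 50 * t)
i1 = 2.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)
H, B = bh_from_measurements(u2, i1, fs=1e4)
```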
A. Dynamic Model Generally, the voltage applied to a winding must balance the voltage drop in the winding resistance and the induced voltage. VAG reactors consist of primary and pairs of series-connected secondary windings. Hence, a dynamic model can be defined as
u_1 = R_1 i_1 + dψ_1/dt, u_2 = R_2 i_2 + dψ_2/dt, (3)
where ψ_1 and ψ_2 represent the primary and secondary flux linkages, respectively. Furthermore, R_1 and R_2 depict the primary and secondary winding resistances, and u_1 and u_2 are the voltages over the primary and secondary windings, respectively. The primary i_1 and secondary i_2 currents are interconnected through the flux linkages as
i_1 = i_1(ψ_1, ψ_2), i_2 = i_2(ψ_1, ψ_2). (4)
On the basis of (3) and (4), a dynamic model of a VAG reactor can be constructed, excluding the core and other loss components, such as eddy-current-induced losses in the tank walls of oil-immersed VAG reactors. Fig. 6 illustrates the primary side of this dynamic model. The information about the flux linkages in (4) can also be understood as a set of nonlinear primary and secondary inductances. B. Augmented Dynamic Model The core losses can be considered by adding a constant parallel resistance to the nonlinear inductances. The augmentation of this parallel circuit with a series inductance simplifies the implementation and ensures a stable simulation of the augmented dynamic model. An equivalent circuit of the primary side of the augmented model can be seen in Fig. 7, and the schematic representation of the complete augmented model is shown in Fig. 8. The dynamic models are similar to those of induction machines. However, in contrast to induction machine models, where all values are usually assumed to be constants, dynamic VAG reactor models include nonlinear inductances that are specified by knowledge about the flux linkages in the reactor as in (4). Furthermore, both dynamic models can simulate comparable devices with two pairs of windings in this configuration. However, the model might be expanded to include more windings. Nevertheless, this would increase the modeling complexity and is outside the scope of this study. A. Series Resistance The series resistances of the primary R_1 and secondary R_2 windings are determined by injecting a constant DC current into the windings. The injected currents cause a voltage drop across the respective windings. After measuring the voltage drop, the winding resistance can be calculated using Ohm's law. This operation must be carried out on the real VAG reactor. However, if the dynamic model is to be utilized for design reasons (i.e., the VAG reactor has not yet been built), standard FEM software can be used to obtain resistance values. B. Series Inductance Depending on the design and ratings of the VAG reactor, the series inductances could be used as leakage inductances to model stray fluxes. At higher currents, the magnetic field strength increases, and stray fluxes can induce significant eddy-current losses, for example, in tank walls of oil-immersed reactors. In this case, the current-flux-linkage mappings (see Section IV-D) could be divided into a magnetizing and a stray flux linkage portion, potentially leading to an increased modeling accuracy. This paper avoids the division of flux linkages into magnetizing and leakage flux due to the missing tank wall in the studied VAG reactor and the low rated currents of the reactor. Hence, the series inductance solely serves as an insignificantly small parasitic inductance that enables the addition of the core resistance without creating direct feedthrough in the model. Thus, no parameterization is required on the reactor or the FEM program.
Fig. 9. Exemplary primary u_1,meas and secondary u_2,meas voltage measurement results from the VAG reactor, which can be utilized as an input file for a time-domain simulation of the dynamic models.
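A minimal sketch of how the core-loss-free dynamic model (3)-(4) could be time-stepped is given below. The lookup function standing in for the inverted FEM current-flux-linkage mappings, and all numeric parameter values, are hypothetical placeholders.

```python
import numpy as np

def simulate_vag(u1, u2, dt, current_lookup, R1=0.5, R2=0.8):
    """Forward-Euler integration of d(psi)/dt = u - R*i with i = f(psi1, psi2).
    u1, u2: sampled winding voltages; current_lookup(psi1, psi2) -> (i1, i2)."""
    psi1 = psi2 = 0.0
    i1_out, i2_out = [], []
    for v1, v2 in zip(u1, u2):
        i1, i2 = current_lookup(psi1, psi2)
        psi1 += dt * (v1 - R1 * i1)
        psi2 += dt * (v2 - R2 * i2)
        i1_out.append(i1)
        i2_out.append(i2)
    return np.array(i1_out), np.array(i2_out)

# Placeholder linear lookup standing in for the nonlinear FEM-derived mappings
def linear_lookup(psi1, psi2, L1=0.2, L2=0.1):
    return psi1 / L1, psi2 / L2

dt = 1e-4
t = np.arange(0.0, 0.1, dt)
u1 = 150.0 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)   # 150 V RMS primary excitation
u2 = np.full_like(t, 5.0)                              # constant secondary-side voltage
i1, i2 = simulate_vag(u1, u2, dt, linear_lookup)
```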
C. Core Resistance In order to determine the constant core-loss resistances R_c,1 and R_c,2, an experiment must be conducted on the actual VAG reactor. Section V-A describes the experimental setup used to acquire measurement data for the parameterization procedure. The core-loss resistances of the VAG reactor can be determined with a single experiment. To this end, an arbitrary primary voltage level is applied to the primary winding, and a power source supplies the secondary winding. In this study, the secondary winding was powered by a constant-current-controlling power source. The main objective of the experiment is to measure the primary u_1,meas and secondary u_2,meas voltages. The measured voltages serve as voltage input files for the augmented model. Fig. 9 illustrates the input file utilized in this study. This file enables the simulation of the augmented model with measured voltages in the time-domain. The simulation provides sets of primary and secondary current files, which are compared to the actual currents observed on the VAG reactor. The goal is to select the core resistances to minimize the magnitude of the error between the simulated and measured currents. Thus, an objective function was defined as the sum of the normalized errors between the measured and simulated currents over all measurement points, where N represents the total number of measurement points, y_i,1 and y_i,2 stand for the ith measured primary and secondary currents, ŷ_i,1 and ŷ_i,2 are the respective simulated values, and max(y_1) and max(y_2) are the measured maximum primary and secondary currents in the measurement interval, utilized to normalize the error values. Determining suitable values for the core resistances is an optimization task that can be challenging due to the potentially nonlinear and nonconvex nature of the problem. However, the objective function can be minimized by utilizing a genetic algorithm (GA). Therefore, the augmented model was constructed in MATLAB/Simulink, and the optimization was conducted utilizing the GA from MATLAB's global optimization toolbox. Table II lists the model parameters of both dynamic models, which were determined following the previously described methods. D. Flux Linkage Characteristics The essential data for both dynamic models, having the most significant influence on the modeling accuracy, is the information about the flux linkage characteristics of the VAG reactor to be studied. As defined in (4), the currents are functions of the respective flux linkages. The FEM simulations provide the flux linkage characteristics as functions of the respective currents, i.e., ψ_1 = ψ_1(i_1, i_2) and ψ_2 = ψ_2(i_1, i_2). A dataset for (4) can be attained by inverting those results. For instance, functions such as MATLAB's scatteredInterpolant or griddedInterpolant create interpolants that act as look-up tables. Those interpolants perform linear interpolation on the datasets to provide the necessary information for the dynamic models.
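The fitting step above can be sketched as follows; SciPy's differential evolution is used here merely as a stand-in for the genetic algorithm from MATLAB's global optimization toolbox, the error norm is a simple normalized absolute error, and the toy `simulate` function, the waveforms, and the bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(rc, measured_i1, measured_i2, simulate):
    """rc = (Rc1, Rc2); `simulate` returns simulated currents for given core resistances."""
    sim_i1, sim_i2 = simulate(rc[0], rc[1])
    e1 = np.sum(np.abs(measured_i1 - sim_i1)) / np.max(measured_i1)
    e2 = np.sum(np.abs(measured_i2 - sim_i2)) / np.max(measured_i2)
    return e1 + e2

# Toy stand-in data and model (a real run would use the Simulink model and measurements)
t = np.arange(0.0, 0.1, 1e-4)
measured_i1 = 1.0 * np.sin(2 * np.pi * 50 * t)
measured_i2 = np.full_like(t, 5.0)

def simulate(rc1, rc2):
    # Fake dependence on the core resistances, only to make the example runnable
    return measured_i1 * (1.0 + 10.0 / rc1), measured_i2 * (1.0 + 10.0 / rc2)

result = differential_evolution(objective, bounds=[(10.0, 5000.0), (10.0, 5000.0)],
                                args=(measured_i1, measured_i2, simulate), seed=1)
print(result.x)
```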
V. RESULTS Both dynamic models were experimentally validated with the help of the small VAG reactor. To this end, time-domain MATLAB/Simulink models matching the models shown in Fig. 6 and Fig. 8 were created. As inputs for the Simulink models, experimentally recorded primary and secondary voltages from the small VAG reactor, comparable to those seen in Fig. 9, were utilized in the simulations. The model outputs, simulated primary and secondary currents, were then compared to the observed primary and secondary currents from the VAG reactor. A. Experimental Setup The VAG reactor was tested with two different experimental setups. Most of the tests were performed by directly connecting the primary side of the VAG reactor to a Teklab SM1603 isolation transformer. Thus, the VAG reactor could be tested at different primary voltage levels. Fig. 10 illustrates the test network used in the second step. The VAG reactor was functioning as an arc suppression coil (ASC) in this second step. That means the VAG reactor was connected to the neutral of the secondary side of the isolation transformer. The network resembles a single feeder in a distribution network with two separate line sections. The parameters of the network components are listed in Table III. The secondary side of the VAG reactor has been controlled with an EA-PSI 91500-30 laboratory power supply in constant current control mode. Different operating points of the VAG reactor have been investigated by selecting different DC secondary currents. A LeCroy HVD3206A high-voltage differential probe was used to measure the primary voltage. The primary current was measured utilizing a LeCroy CP150 current probe. The secondary voltage was measured with a Testec TT-SI 9010A differential probe, and the secondary current was measured utilizing a LeCroy AP015 current probe. A digital oscilloscope (LeCroy HDO6054) visualized and stored all measurements (i.e., voltages and currents) with a sampling frequency of 10 kHz. B. Time-Domain Analysis The time-domain simulation and measurement results were compared as a first model verification step. The high modeling accuracy of both dynamic models (i.e., dynamic and augmented dynamic) can be seen by comparing the simulated with the measured primary i_1 and secondary i_2 currents. Fig. 11 illustrates the behavior of the simulated models and the VAG reactor at a primary voltage of 150 V and an RMS secondary current of approximately 5 A. It can be seen that both models perform equally well. Furthermore, both models match the measured currents almost perfectly. The minor difference between the simulated and the measured values could be caused by measurement inaccuracies (i.e., the accuracy of the current and differential probes). Even though the power supply operates in the constant current control mode, a double-frequency component may be seen in the control current, as illustrated in Fig. 11.
This double-frequency component is the result of the interaction between the AC flux and the flux created by the secondary windings. As the AC flux increases, it will partially reduce the flux density in the vicinity of the secondary windings because it opposes a portion of the flux produced by the secondary windings. When the AC flux again decreases, the flux density in this region returns to its initial value. This occurrence takes place when the AC flux density reaches the positive and negative peaks. Consequently, the secondary flux linkage experiences a double-frequency component, yielding a double-frequency AC component in the secondary current. The magnitude of this AC component grows as the voltage across the primary side of the reactor rises, resulting in greater AC flux densities and more considerable variations in the local flux density around the secondary winding. In addition to investigating the dynamic modeling accuracy when the RMS primary voltage and secondary currents are constant, it is essential to evaluate the modeling accuracy of both models during transient occurrences. The VAG reactor has thus been deployed as an ASC as part of the small test network illustrated in Fig. 10. By closing switch K1 in the test network, a resistance R_f representing a fault is connected to the network. As demonstrated in Fig. 12, the fault causes the voltage across the primary winding (i.e., the neutral voltage of the network) and the current of the VAG reactor to increase suddenly after approximately 0.1 seconds. Furthermore, it can be seen that there are minor differences between the simulated and measured currents. Both models, however, exhibit the same transient behavior as the tested VAG reactor. The modeling accuracy difference between the dynamic model and the augmented dynamic model is negligible. Therefore, it may be stated that both dynamic models accurately reproduce the transient behavior of the VAG reactor. Furthermore, because the augmentation of the dynamic model does not significantly increase the time-domain modeling accuracy, it can be avoided if the primary objective of the model is to resemble the inductive current production of a VAG reactor. VAG reactors are primarily inductors with typically small power factors. Hence, modeling core losses can be neglected in most cases. If that is the case, utilizing the dynamic model as a design tool for VAG reactors is possible. The non-augmented dynamic model does not necessarily require any characterization procedure performed on an actual VAG reactor. Instead, the series resistances can be approximated, and the flux linkage characteristics can be obtained from FEM simulations in case the magnetization behavior (i.e., BH curve) can be obtained from the steel manufacturer. That allows engineers to investigate a potential VAG reactor design without building it. However, this simplification is not necessarily valid for VAG reactors with higher shares of resistive current production, as demonstrated in [15]. If the simple core loss model in this study is not sufficient, it is possible to utilize a nonlinear core loss model which considers hysteresis and eddy current losses [21]. The implementation of this nonlinear model and its integration into the dynamic model has been previously described in [15].
C. Harmonic Analysis The goals of the dynamic models include utilizing them for harmonic analysis and developing a controller that could mitigate the harmonics created by VAG reactors. Hence, the harmonic content in the primary current of the simulated dynamic model must closely resemble that of the actual VAG reactor. For instance, Fig. 13 illustrates the harmonic content in the primary current of the VAG reactor as a function of the secondary current at 230 V primary voltage. It can be seen that the harmonic content produced by both models agrees very well with the harmonic current production in the tested VAG reactor. D. Steady-State Analysis The steady-state behavior of both dynamic models was compared to the measurement results of the actual VAG reactor as a last model verification step. Fig. 14 illustrates the difference between the simulated and the actual active and reactive power production of the primary side of the VAG reactor. It can be seen that the augmented model reduces the active power modeling error in contrast to the dynamic model. However, the reactive power production of both models is the same. In addition, Fig. 14(b) shows that both dynamic models model the reactive power output of the VAG reactor well. However, the reactive power production of the actual VAG reactor differs slightly from the modeled ones at a primary voltage level of 230 V. The discrepancy between the modeled and the actual reactive power production at 230 V can be explained by the inaccuracy of the magnetization characteristic measurements described in Section II-B1. The non-uniform shape of the reactor core increases the likelihood that the mean path length of the core differs from the chosen one. This has an impact on the calculated magnetic field strength. As a result, the hysteresis and magnetization curve, shown in Fig. 2, would change. Since inductance and inductive current production are directly linked to the magnetization curve, this is likely the source of the deviation between the measured and the simulated values. This deviation could be removed by either making a more accurate measurement of the magnetization curve or obtaining magnetization curve data from the steel manufacturer. The primary inductance values of the VAG reactor, shown in Fig. 15, were calculated as L_1 = U_1/(ω I_1,ind), where ω = 2π · 50 rad/s is the fundamental angular frequency of the primary voltage, and U_1 and I_1,ind are the primary RMS voltage and the RMS inductive primary current, respectively. Fig. 15 shows that both dynamic models model the inductance of the VAG reactor well at 50 and 150 V. The deviation of the magnetization curve can also explain the slight difference between measurements and simulation models at 230 V. It can be concluded that the results of both dynamic models, shown in Figs. 14 and 15, agree very well with the actual behavior of the VAG reactor over the whole range of secondary RMS control currents.
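A short sketch of the post-processing behind the harmonic and inductance results is shown below: the harmonic magnitudes of a sampled primary current are read from an FFT, and the effective primary inductance follows from L1 = U1/(ω·I1,ind). The sampled waveform and the voltage/current values are hypothetical.

```python
import numpy as np

def harmonic_magnitudes(i1, fs, f0=50.0, orders=(1, 3, 5, 7)):
    """Magnitudes of selected harmonics of a sampled current i1 (sampling rate fs)."""
    i1 = np.asarray(i1, float)
    spectrum = 2.0 / len(i1) * np.abs(np.fft.rfft(i1))
    freqs = np.fft.rfftfreq(len(i1), d=1.0 / fs)
    return {k: spectrum[np.argmin(np.abs(freqs - k * f0))] for k in orders}

def primary_inductance(U1_rms, I1_ind_rms, f0=50.0):
    """Effective primary inductance from RMS voltage and RMS inductive current."""
    return U1_rms / (2 * np.pi * f0 * I1_ind_rms)

# Hypothetical distorted 50 Hz primary current sampled at 10 kHz
t = np.arange(0.0, 0.2, 1e-4)
i1 = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.15 * np.sin(2 * np.pi * 150 * t)
print(harmonic_magnitudes(i1, fs=1e4))
print(primary_inductance(230.0, 5.0))
```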
VI. CONCLUSION Two dynamic models of a VAG reactor, including the essential system dynamics, have been developed. The model characterization approach for the proposed dynamic models has been explained in full, allowing the technique to be easily replicated for other VAG reactors. In addition, test results from a small VAG reactor have been compared to the developed models. This analysis revealed that both models accurately capture the dynamic response (time-domain and steady-state) of a VAG reactor across its entire operating range. These results indicate that both models may be utilized in time-domain simulations, such as simulations of power systems. Additionally, engineers may use the dynamic model to design VAG reactors. Future research should focus on developing a real-time controller for VAG reactors, which is facilitated by the models.
Fig. 2. Measured hysteresis curve of the VAG reactor and the derived magnetization curve.
Fig. 3. Measured and extrapolated magnetization characteristics of the VAG reactor core.
Fig. 4. Magnetic flux density in the core when the instantaneous primary voltage is at its peak and a high DC current flows through the secondary windings (FEM model).
Fig. 7. Equivalent circuit of the nonlinear VAG reactor inductance and the parallel core-loss resistance augmented with a series inductance (primary side).
Fig. 8. Schematic representation of the augmented model incorporating a constant core-loss resistance.
Fig. 11. Dynamic response of the VAG reactor and the dynamic models at a primary voltage level of 150 V (RMS) and a secondary current of 5 A (RMS).
Fig. 12. Transient behavior of the VAG reactor. (a) Primary and secondary voltage measurements. (b) Measured and simulated primary and secondary currents.
Fig. 13. Magnitude of different harmonic components in the primary current of the VAG reactor as a function of the secondary current. Simulated dynamic model (solid line), simulated augmented dynamic model (dashed line), and measured (dotted) results at rated nominal primary voltage (230 V).
Fig. 14. (a) Active and (b) reactive power production of the VAG reactor as a function of the secondary current. Simulated dynamic model (solid line), simulated augmented dynamic model (dashed line), and measured (dotted) results at different primary voltage levels.
6,599.6
2023-08-01T00:00:00.000
[ "Engineering", "Physics" ]
On-chip self-referencing using integrated lithium niobate waveguides The measurement and stabilization of the carrier-envelope offset frequency $f_{\textrm{CEO}}$ via self-referencing is paramount for optical frequency comb generation, which has revolutionized precision frequency metrology, spectroscopy, and optical clocks. Over the past decade, the development of chip-scale platforms has enabled compact integrated waveguides for supercontinuum generation. However, there is a critical need for an on-chip self-referencing system that is adaptive to different pump wavelengths, requires low pulse energy, and does not require complicated processing. Here, we demonstrate efficient carrier-envelope offset frequency $f_{\textrm{CEO}}$ stabilization of a modelocked laser with only 107 pJ of pulse energy via self-referencing in an integrated lithium niobate waveguide. We realize an $f$-$2f$ interferometer through second-harmonic generation and subsequent supercontinuum generation in a single dispersion-engineered waveguide with a stabilization performance equivalent to a conventional off-chip module. The $f_{\textrm{CEO}}$ beatnote is measured over a pump wavelength range of 70 nm. We theoretically investigate our system using a single nonlinear envelope equation with contributions from both second- and third-order nonlinearities. Our modeling reveals rich ultrabroadband nonlinear dynamics and confirms that the initial second harmonic generation followed by supercontinuum generation with the remaining pump is responsible for the generation of a strong $f_{\textrm{CEO}}$ signal as compared to a traditional $f$-$2f$ interferometer. Our technology provides a highly-simplified system that is robust, low cost, and adaptable for precision metrology for use outside a research laboratory. Introduction The development of optical frequency combs has enabled high-precision frequency measurements and led to advances in a wide area of research including all-optical clocks, spectroscopy, and metrology [1-3]. Significant advances in nanofabrication technology over the past decade have led to the development of various chip-based platforms for frequency comb generation, including silicon nitride, silicon dioxide, silicon, and aluminum nitride [4-23]. Over the past two decades, two different approaches have been developed for on-chip frequency comb generation. One approach is based on stabilization of the repetition rate and carrier-envelope offset frequency (f_CEO) of a modelocked laser. The f_CEO can be detected using a self-referenced f-2f interferometer, which requires a phase-coherent octave-spanning spectrum [Fig. 1(a)] [2,24-26]. This broadband spectrum is achieved through supercontinuum generation (SCG) in a nonlinear waveguide. The second approach involves Kerr comb generation (KCG), where a single-frequency, continuous-wave laser is used to pump a high-Q microresonator to excite a broadband, dissipative Kerr soliton through parametric four-wave mixing [4]. While the nonlinear broadening stage has been implemented on-chip, f-2f interferometry has been largely performed using bulk optics and a periodically poled χ^(2) crystal or waveguide for second harmonic generation (SHG) [11,12,15,25,26].
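For reference, the comb-line algebra behind f-2f self-referencing (a standard textbook relation, not specific to this work's implementation) can be written as follows: frequency-doubling a comb line from the low-frequency wing and beating it against the comb line at twice the mode number isolates the offset frequency.

$$\nu_n = n\, f_{\mathrm{rep}} + f_{\mathrm{CEO}}, \qquad 2\nu_n - \nu_{2n} = 2\left(n f_{\mathrm{rep}} + f_{\mathrm{CEO}}\right) - \left(2n f_{\mathrm{rep}} + f_{\mathrm{CEO}}\right) = f_{\mathrm{CEO}}.$$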
Since this process occurs after spectral broadening through SCG or KCG, the spectral components used for harmonic generation are at the wings of the generated spectrum, limiting the available peak power and resulting in low power-conversion efficiency of SHG. This issue is particularly severe in KCG [21-23], where auxiliary lasers locked to the Kerr comb are frequency doubled or tripled for f-2f or 2f-3f interferometry. In addition, a variable delay line needs to be implemented in such a system to compensate for the dispersive walk-off between the f- and 2f-components. Furthermore, for efficient phase matching at different wavelengths, devices with different poling periods are needed and precise temperature control is required. As an alternative, here we consider a scheme where high-peak-power pump pulses first generate a harmonic signal before the remaining pump is used for SCG to create the fundamental frequency component [Fig. 1(b),(c)]. The high peak power allows for highly efficient χ(2)-based harmonic generation, while providing sufficient excess pump power to allow for spectral broadening through the χ(3) nonlinear process. Recently, there have been demonstrations of on-chip f-2f interferometry through simultaneous SCG and second-harmonic generation (SHG) [14,16,19], and f CEO stabilization has been demonstrated in silicon nitride (SiN) waveguides using a photo-induced nonlinear grating effect (χ(2) = 0.5 pm/V) [16] and in aluminum nitride waveguides (χ(2) = 1 pm/V) [19]. While such an approach offers the potential for a high level of simplicity to produce a self-referenced frequency comb, SiN waveguides require an optical-writing process, which involves a femtosecond laser to generate the effective χ(2) nonlinearity and sets a limit on the input pulse energy that can be used for f-2f interferometry, while AlN waveguides demand nanojoule pulse energies, which are considerably higher than what has been achieved with separate SCG and SHG systems [11,12,15]. In recent years, integrated lithium niobate (LN, LiNbO3) has emerged as an ideal platform for nonlinear photonics due to its large nonlinear index (n2 = 2 × 10−19 m2/W) and strong χ(2) nonlinearity (χ(2) = 40 pm/V) [27-31]. Moreover, advances in waveguide fabrication technology [32] have led to the realization of low-loss waveguides with tight optical confinement, enabling dispersion engineering, which is critical for nonlinear photonics applications. Previously, Yu et al. [29] showed the first evidence of f CEO detection using octave-spanning SCG in an LN waveguide. Alternatively, SCG has been demonstrated in a periodically poled integrated LN waveguide via cascaded nonlinearities using a 2-µm pump [33]. However, this system produces a weak f CEO beatnote due to the low pulse energy, and requires additional design complexity in terms of dispersion engineering and group-velocity matching, as well as additional fabrication steps for poling. In this paper, we demonstrate highly efficient self-referencing in an integrated LN waveguide by leveraging the large intrinsic χ(2) and χ(3) nonlinearities. Self-referencing is achieved by performing both SHG and SCG for f-2f interferometry in a single waveguide. We use this LN f-2f interferometer to demonstrate f CEO stabilization of a modelocked fiber laser with record-low pulse energies of 107 pJ, with a large reduction in phase noise of >100 dB/Hz at 10 Hz.
We verify that the stabilization performance is equivalent to that of a conventional f-2f module. In addition, we demonstrate f CEO beatnote detection over 70 nm of pump wavelength tuning. We also numerically model the pulse propagation by employing a single nonlinear envelope equation that incorporates both second- and third-order nonlinearities. Our modeling unveils the fascinating underlying dynamics of simultaneous harmonic generation and SCG that manifest in our system and correctly reproduces the experimentally observed spectrum over the vast optical bandwidth spanning multiple octaves. Our demonstration illustrates the technological readiness of LN waveguides for implementation of a low-cost and adaptable precision metrology system for use outside a research laboratory.
Theory
Most of the prior work done on pulse propagation dynamics with χ(2) effects has implemented coupled equations for the fundamental and second-harmonic fields [33-35]. However, this analysis breaks down for ultrabroadband χ(2) and χ(3) interactions where these fields spectrally overlap. In order to model ultrabroadband nonlinear phenomena in LN waveguides, where the combined χ(2) and χ(3) effects result in multi-octave bandwidth generation, we consider a single nonlinear envelope equation taking into account χ(2) and χ(3) effects [36-41]. We solve the nonlinear envelope equation [Eq. (1)], where $P_{NL} = \epsilon_0\left[\chi^{(2)} E^2 + \chi^{(3)} E^3\right]$ is the total nonlinear polarization with contributions only from non-negative frequencies, $\tau_{sh} = 1/\omega_0 - \partial\left[\ln(n(\omega))\right]/\partial\omega\,\big|_{\omega=\omega_0}$ is the optical shock time, $\beta_n$ is the n-th-order dispersion coefficient, $\alpha$ is the propagation loss, $\omega_0$ is the pump frequency, and $\tau = t - \beta_1 z$ is the local time in the moving frame. We incorporate the effects of second- and third-order nonlinearities, high-order dispersion, and self-steepening. We solve Eq. (1) numerically via the split-step Fourier method, using the fourth-order Runge-Kutta method for the nonlinear step. Figure 2(a) shows the temporal and spectral evolution of the pulse in a 0.5-cm-long LN waveguide with a cross section of 800×1250 nm. The pump pulse is 90 fs in duration with a pulse energy of 107 pJ and is centered at 1560 nm. In the spectral domain, we immediately see the effects of SHG and sum-frequency generation at 780 nm, along with third-harmonic generation at 520 nm. As the pulse propagates in the waveguide, we observe spectral broadening due to self-phase modulation. For z > 4 mm, we observe dispersive wave (DW) formation [42-45] originating near 860 nm that subsequently blue-shifts due to phase matching and approaches the second-harmonic wavelength. In addition, we observe the formation of the second harmonic of the DW. Figure 2(b) shows the simulated group-velocity dispersion (GVD) and the dispersion operator $\hat{D} = \sum_{n \geq 2} \frac{\beta_n(\omega_0)}{n!} (\omega - \omega_0)^n$ for a 1560-nm pump ($\omega_0$ corresponds to the center frequency of the pump) [43-45], and Fig. 2(c) shows the simulated spectrum at the waveguide output. The spectral position of the DW is predicted from the zero-crossing of the dispersion operator. The spectral overlap between the DW and the second-harmonic component allows for effective mixing between the f and 2f components and results in a strong f CEO beatnote. Figure 2(d) shows the spectrogram at the output. We calculate a group-velocity mismatch of 130 fs/mm between the pump and the second-harmonic component, which is significantly lower than that of bulk LN (300 fs/mm) [28,46].
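As an aside, a minimal schematic of the numerical approach mentioned above (symmetric split-step Fourier propagation with a fourth-order Runge-Kutta nonlinear step) is sketched below. This is not the authors' implementation: the full χ(2)+χ(3) polarization, self-steepening, and higher-order dispersion terms of Eq. (1) are reduced here to a placeholder Kerr nonlinearity and a β2/β3-only dispersion operator, and all parameter names are illustrative.

import numpy as np

def linear_operator(omega, beta2, beta3, alpha):
    # Truncated dispersion operator i*(beta2/2*w^2 + beta3/6*w^3) plus a loss term -alpha/2
    return 1j * (beta2 / 2 * omega**2 + beta3 / 6 * omega**3) - alpha / 2

def nonlinear_rhs(A, gamma):
    # Placeholder Kerr-type (chi3-only) nonlinearity; the actual model also carries chi2
    # terms and self-steepening through the full nonlinear polarization P_NL.
    return 1j * gamma * np.abs(A)**2 * A

def rk4_nonlinear_step(A, dz, gamma):
    # Fourth-order Runge-Kutta integration of the nonlinear step over one spatial step dz
    k1 = nonlinear_rhs(A, gamma)
    k2 = nonlinear_rhs(A + 0.5 * dz * k1, gamma)
    k3 = nonlinear_rhs(A + 0.5 * dz * k2, gamma)
    k4 = nonlinear_rhs(A + dz * k3, gamma)
    return A + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(A0, dt, length, dz, beta2, beta3, alpha, gamma):
    # Symmetric split-step: half linear step (frequency domain), nonlinear step, half linear step
    omega = 2 * np.pi * np.fft.fftfreq(A0.size, d=dt)
    half_linear = np.exp(linear_operator(omega, beta2, beta3, alpha) * dz / 2)
    A = A0.astype(complex)
    for _ in range(int(round(length / dz))):
        A = np.fft.ifft(half_linear * np.fft.fft(A))
        A = rk4_nonlinear_step(A, dz, gamma)
        A = np.fft.ifft(half_linear * np.fft.fft(A))
    return A

To approach the simulations shown in Fig. 2, the envelope A(τ) would be initialized with the 90-fs pump pulse and the placeholder operators replaced by the full dispersion and χ(2)/χ(3) polarization terms of Eq. (1).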
This low temporal walk-off eliminates the need for the implementation of a delay line and enables the single waveguide device for f -2f interferometry. In our experiment, we send a pulse train from a modelocked erbium fiber laser centered at 1560 nm with a pulse duration of 90 fs and a 250-MHz repetition rate into a 0.5-cm-long air-clad LN waveguide. We pump the fundamental TE mode of the waveguide which allows us to exploit the largest nonlinear tensor component for the χ (2) process in the x-cut film. The LN waveguide is fabricated using an x-cut 800-nm LN thin film with an etch depth of 450 nm and a width of 1250 nm. The waveguide output is collected using a lensed fiber sent to two different optical spectrum analyzers for spectral characterization. We estimate an input coupling loss of 10.3 dB and an overall insertion loss of 17.5 dB. This coupling loss can be reduced to 1.7 dB, and the overall losses can be as low as 3.4 dB [47]. Figure 3 shows the spectral evolution as the pulse energy in the waveguide is increased. For 20 pJ of pulse energy, we observe a strong SHG signal peaked at 760 nm and a weak fourthharmonic signal at 380 nm. As the pulse energy is increased, we observe the formation of a DW centered at 840 nm. At 107 pJ, we observe a blue-shift of the DW due to phase matching that results in overlap with the SHG signal, enabling the generation of a strong f CEO beatnote. Similar to our modeling, we observe the second harmonic of the DW near 400 nm. Figure 4(a) shows the entire supercontinuum spectrum which continuously spans 700 -2200 nm for a coupled pulse energy of 107 pJ. The f CEO of the modelocked laser is measured by directly detecting the waveguide output using a silicon avalanche photodiode (APD, 400 -1000 nm wavelength range). For f CEO stabilization, the measured offset from the APD is phase locked to a 10-MHz rubidium frequency standard using a feedback loop, which includes a phase detector and a PID controller. Figure 4(b) shows both the measured free-running (red) and locked (blue) in-loop f CEO beatnote centered at 20 MHz, obtained with a 10-Hz resolution bandwidth (RBW) using a phase noise analyzer. Figure 4(c) shows the locked (f CEO ) beatnote over a 50-Hz span with 1-Hz RBW. We measure a 3-dB bandwidth of 1 Hz which is at the resolution limit of the analyzer. For comparison, we measure the out-of-loop f CEO beat using a standard f -2f interferometer based on a highly nonlinear fiber and a bulk periodically poled lithium niobate frequency doubler, and we observe a nearly identical signature [ Fig. 4(c)]. Figure 5 shows the single sideband phase noise of the f CEO beatnote for the free-running (red) and locked (blue) cases. We achieve a tight phase lock and observe a large reduction in phase noise (>100 dB/Hz at 10 Hz). Lastly, we investigate the operational range of the pump wavelength for generating the f CEO beatnote. For this measurement, we use 200-fs pulses from a tunable femtosecond optical parametric oscillator (OPO) with a repetition rate of 80 MHz. Figure 6(a) show the measured optical spectra with the corresponding RF spectra. The peak at 80 MHz corresponds to the repetition rate and the two next highest peaks correspond to f CEO1 and f CEO2 . The pump wavelength is tuned from 1470 nm to 1530 nm, the upper wavelength limited by the operating range of the OPO. We achieved a f CEO signal with >20 dB signal-to-noise ratio (SNR) with a modelocked pulse source from 1490 nm to 1530 nm. 
As the pump wavelength is increased, we see the f CEO beatnote become stronger, with an SNR as high as 40 dB for a pump wavelength of 1530 nm. Remarkably, the f CEO beatnotes are bright, featuring a high intensity of −8.26 dBm, at the same level as the repetition frequency, thanks to the spectral brightness of both the DW and SHG components and their relatively good spectral overlap. Since the SHG signal strength remains largely the same, the increase in SNR as the pump is red-shifted is attributed to the blue-shift of the DW towards the second-harmonic position. For this GVD profile (Fig. 2), the spectral position of the dispersive wave ω_DW is largely dictated by GVD and third-order dispersion through the relation ω_DW = −3β2/β3 [45]. As the pump is red-shifted, the SHG also red-shifts while the DW blue-shifts due to an increased β2. In our waveguide, better spectral overlap between the SHG and DW is achieved as the pump wavelength is increased, and an f CEO signal with an SNR >20 dB is obtained from 1490 nm to 1560 nm. The upper limit is dictated by the tuning range of our pulse source. As can be seen from Fig. 3, when pumping at 1560 nm the DW has not yet reached the best overlap with the SHG. Figure 6(b) shows a plot of the peak wavelength of the DW (red) and the second harmonic of the pump wavelength (blue) for a range of pump wavelengths. Based on our fit, we expect the best overlap to occur at 1587 nm, which corresponds to the crossing point between the DW and second-harmonic curves, and we extrapolate that the f CEO detection range is nearly symmetric about this crossing point up to 1700 nm. In conclusion, we demonstrate on-chip self-referencing using a single integrated LN waveguide. We achieve efficient f CEO stabilization of a modelocked fiber laser using 107 pJ of pulse energy by exploiting the efficient second-harmonic process that occurs at the beginning of the waveguide while still allowing for strong χ(3) interactions with high peak pump power. The platform offers a wide pump wavelength range of >70 nm over which the f CEO beatnote can be generated. The simple structure can replace a conventional f-2f interferometer based on bulk PPLN, which requires various poling periods, a temperature controller, and a delay line for extracting the f CEO. In addition, we theoretically investigate this system by modeling pulse propagation in an LN waveguide with χ(2) and χ(3) effects. The low power consumption and compact footprint of our scheme offer promise towards the miniaturization of frequency comb technology and a step towards the realization of an integrated fully-stabilized frequency comb source for applications beyond the lab.
Acknowledgments
Device fabrication is performed at the Harvard University Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI), which is supported by the National Science Foundation under NSF ECCS award no. 1541959. The authors thank J. K. Jang and Y. Zhao for useful discussions.
Disclosures. The authors declare no conflicts of interest.
Fig. 6 caption: (a) The plot shows the optical spectra (left) and corresponding RF measurement (right) for four different pump wavelengths from 1470 nm to 1530 nm. The peak at 80 MHz corresponds to the repetition rate and the two highest peaks below that correspond to f CEO1 and f CEO2.
We observe a 40 dB increase in the signal-to-noise ratio of the f CEO beatnote as the pump wavelength is increased, which is attributed to the increased spectral overlap between the DW and the second-harmonic component. The difference in the f CEO arises from the drift of our pump source. The RBW of the RF spectrum analyzer is 300 kHz. (b) Center wavelength of the DW peak (red) and the second harmonic of the pump wavelength (blue) for a range of pump wavelengths. Red circles denote the experimentally measured points and the solid red line is a fit based on these points. The shaded region shows the spectral range of f CEO detection. We expect that the range extends nearly symmetrically on the other side of the crossing point between the DW and SHG curves.
3,976
2020-03-25T00:00:00.000
[ "Physics", "Engineering" ]
Chile: Improving Access and Quality to Stop Social Unrest
Chile has faced significant student unrest and accompanying political instability. The main cause of this activism was increased tuition in higher education, imposed without any improvement in academic standards. Now, Chileans want to stop the civil unrest in order to avoid a negative impact on the country's remarkable gross domestic product per capita growth rate (4% per year in 2000-2011) and on the ongoing reduction of poverty (from 38% in 1990 to 15% in 2009). At the beginning of 2012, polls showed a majority supporting the design of strategies to reduce social inequality and gaps in education. Fortunately, helping students to read one or two pages in their leisure time, in order to be prepared for active engagement in class, has reduced learning gaps and increased promotion rates in pilot trials. If the results are confirmed in a forthcoming large-scale trial, this strategy could help restrain further demonstrations and provide a model for a number of Latin American countries facing similar problems.
STUDENTS' DEMONSTRATIONS FROM 2011 TO THE PRESENT
In May 2011, Chilean university students took to the streets to demand reform of the education system. They asked for a fair student-loan scheme and access to quality education for everyone. When the school year ended in December, there was no sign of a settlement to the most serious confrontation with students that Latin America had seen in the past two decades. The top 40 percent of each age-group cohort now has access to higher education. Even though this is an impressive achievement, most of these students belong to the upper half of the socioeconomic distribution (households having an average income over US$20,000). However, two-thirds of these families have difficulties financing the annual cost of higher education (ranging from US$5,000 to 10,000 per student). Financing education is especially difficult for middle-class families with more than one child, because they do not have access to affordable student loans. Money is needed to pay for further education after high school, but prior knowledge and the skills to acquire new knowledge are also required for admission to higher education, and for students to stay enrolled and graduate. Being a good student in a public high school does not guarantee access to higher education. As an example, the valedictorian of a marginal urban public high school, with a high school grade average of 95 percent, achieved only 423 points in the 2011 University Selection Test, below the minimum of 450 points required to enroll at a university. Graduates from public high schools often do not have the capacity to learn university-level material. They have not reached the necessary level of intellectual development, and remedial courses cannot close this gap. These students require more individualized teaching, but this cannot be provided, given the large class sizes and the lack of faculty experience with cooperative and interactive pedagogy. Therefore, only one in three admitted students eventually graduates in Chile, whereas the comparable ratio is 8:1 for Argentina and 2:1 for Colombia.
CLOSING THE GAP
The need for remedial courses in college is not unusual, but in the United States students can take remedial courses that do not count toward a degree; they just delay the time to degree. A recent report found that only one-third of US students leave high school academically prepared for college (one-sixth of Hispanic students).
Some studies state that as many as 40 percent of college students will take at least one remedial course. However, in Latin America and other developing countries, university study involves the pursuit of professional degrees-such as in law, medicine, architecture, or engineering-without room in the schedule for general study or remedial work. Given that all students follow the same rigid degree program, remedial courses do not fit into schedule unless the whole first semester is allocated to them. Fortunately, systematic help has been effective for students to gain preparation for increased engagement in each class. This is the objective of the innovation now being introduced at the first semester of Universidad Autonoma de Chile. The essential components are: (1) a clear outline and summary of topics to be covered in each class, distributed during (or before) the first class session; (2) specific text, assigned for each class (starting with less than 1,000 words in the first semester, given that students are not used to extensive reading assignments), covering the basic knowledge (definitions, concepts, or basic data) in advance in order to derive maximum benefit from the class; (3) start each class with an oral factual (literal) question to one student (selected at random) and assign a mark for the response to the question (as a sort of scaffolding to create the habit of reading in advance); (4) request students (immediately after the oral quiz) to ask their questions (about what they read beforehand) or to read a passage that they did not understand (an interesting discussion usually flows from their questions); (5) use the rest of the class time to deliver the lesson as the teacher prefers; and (6) provide the usual references for additional reading, after class. Even if students do not know the exact answer (to the oral question) but can demonstrate that they read the material, they still receive 60 percent credit for answering the question. Pilot trials have shown that since the students know exactly what and how to study, it is easier for them to review the material in a productive way. They soon decide what areas they need to focus on (for example, vocabulary or meaning). This kind of freedom fosters autonomy in students and gives them responsibility for their own learning. Faculty participating in pilot experiences has reported increased participation in class, and students polled responded that previous reading improved their learning. Therefore, it was decided to start large-scale implementation in March 2012. Syllabus and materials for the 156 courses (offered in the first semester in 26 programs) were already available on the university Web site for new students enrolled, in January 2012. Deans, program directors, and professors have participated in three practical seminars. Hopefully, this innovation will drastically reduce the number of traditional lectures and will prompt improved learning experiences. 5 To limit confusion, only a few key changes will be implemented in each semester. Samples of incoming students in each first semester course will be reporting day-by-day (during the first three weeks) about the way the class starts (oral question and grading the response). Later on, program directors will talk with professors who forget to implement such a key change. The innovation will be implemented in ensuing semesters, with a similar sequence. The impact of this strategy will be carefully evaluated at the end of June 2012. 
It is hoped that the rest of the Chilean universities will take advantage of this approach if it proves successful. Throughout Latin America, university first-year dropout rates average 50 percent. It is estimated that about one-third of the 10 million underachieving Latin American university students (those lacking the required skills and knowledge) could also benefit from this low-cost treatment and keep moving forward in their academic careers.
1,649.2
2015-03-25T00:00:00.000
[ "Economics", "Education", "Political Science" ]
Nna1, Essential for Purkinje Cell Survival, Is also Associated with Emotion and Memory
Nna1/CCP1 is generally known as a causative gene for a spontaneous autosomal recessive mouse mutation, Purkinje cell degeneration (pcd). There is ample evidence that the cytosolic function of the zinc carboxypeptidase (CP) domain at the C-terminus of the Nna1 protein is associated with cell death. On the other hand, this molecule's two nuclear localization signals (NLSs) suggest that other functions exist. We generated mice deficient in exon 3 (Nna1N KO), which encodes a portion of the N-terminal NLS. Despite the frameshift occurring in these mice, an Nna1 protein lacking the N-terminal region was still expressed. Surprisingly, the pcd phenotype did not occur in the Nna1N KO mouse. Behavioral analysis revealed that they were less anxious than controls when assessed by the elevated plus maze and the light/dark box tests. Furthermore, they showed impairments in context-dependent and sound stimulus-dependent learning. Biochemical analysis of Nna1N KO mice revealed a reduced level of the AMPA-type glutamate receptor GluA2 in the hippocampal synaptosomal fraction. In addition, the motor protein kinesin-1, which transports GluA2 to dendrites, was also decreased. These results indicate that Nna1 is also involved in emotion and memory learning, presumably through the trafficking and expression of synaptic signaling molecules, besides its known role in cell survival.
Introduction
Nna1 (also known as CCP1 or Agtpbp1) was identified as a causative gene for Purkinje cell degeneration (pcd), an autosomal recessive disorder [1]. Defects in Nna1 cause degeneration not only in Purkinje cells but also in many other neurons of the central nervous system, including cerebellar granule cells, neurons in the deep cerebellar nuclei, neurons in the inferior olive, as well as retinal photoreceptors, olfactory bulb neurons, and individual subpopulations of thalamic neurons [2-6]. Another report shows that mice deficient in this molecule exhibit male infertility due to defective spermatogenesis [7]. The mouse Nna1 gene encodes a protein of 1218 amino acids (aa), and the molecule contains multiple functional domains. The most investigated region is a zinc carboxypeptidase (CP) domain located in the C-terminus (843-1013 aa); mutations in this region induce pcd [8,9]. There are two nuclear localization signals (NLSs) present at the N and C termini (144-151 aa and 996-1016 aa) [8,10], and studies using fusion proteins with green fluorescent protein have shown that Nna1 exists in both the nucleus and cytoplasm [10]. We have so far conducted various examinations to elucidate the function of the Nna1 protein.
Generation of Nna1 N (Nna1 ∆Ex3) KO Mice
Assuming that Nna1 mRNA has multiple transcription or translation start sites, we established mice floxed for exon 3 (Figure 1a,b), which encodes a part of the NLS located on the N-terminal side of Nna1. These mice were crossed with TLCN-Cre mice to generate exon 3-deficient mice (Figure 1a,c,d). We named these exon 3-deficient mice and the previously generated exon 21/22-deficient mice Nna1 N KO and Nna1 C KO mice, respectively. To examine Nna1 expression in the cerebral cortex, cerebellum, and hippocampus of these mice, we performed Western blot analysis using the anti-Nna1 antibody against the C-terminus (1188-1218 aa).
Several ladder-like bands, including a 150 kDa band, were detected in WT mouse brains, while in Nna1 C KO mice these bands were barely detected (Figure 1e). In the Nna1 N KO mice, a weak Nna1 band slightly smaller than 150 kDa, together with ladder-like bands, was observed. Northern blotting using the Nna1 C-terminal probe (exons 17 to 23) revealed stronger signals in the cerebrum, cerebellum, and hippocampus of Nna1 N KO mice than in WT mice (Supplementary Materials, Figure S1a). Since deletion of exon 3 of the Nna1 gene causes a frameshift, the presence of truncated Nna1 proteins in the Nna1 N KO mice suggests that Nna1 mRNA has multiple translation start sites and escapes nonsense-mediated mRNA decay (NMD). The increased amount of mRNA in Nna1 N KO mice could be a compensatory upregulation for the loss of intact Nna1 proteins.
Normal Morphology in Nna1 N KO Mice
We next performed morphological analysis to investigate the brain phenotype of Nna1 N KO mice. Macroscopic images of the adult brains showed none of the cerebellar atrophy seen in Nna1 C KO and pcd mice (Figure 2a). We next performed histological analyses on the cerebellum by Calbindin-D28K immunohistochemistry, and there were no differences in the lobular and laminar structures between WT and Nna1 N KO mice (Figure 2b,c). Double staining for Car8 and VGluT1, markers for Purkinje cells and parallel fiber terminals, respectively, showed no discernible differences in dendritic arborization and spine formation of Purkinje cells, or in synapse formation with parallel fibers on distal spiny branchlets (Figure 2d,e). Immunohistochemistry for VGluT2, a marker for climbing fiber terminals, indicated no significant differences in the distribution and wiring of climbing fiber synapses on proximal shaft dendrites (Supplementary Materials, Figure S2a-c). In addition, Nissl staining of the hippocampus showed no significant differences in the histology and cellular alignment between WT and Nna1 N KO mice (Figure 2f).
Figure 1 caption (partial): loxP sites are indicated by triangles. Red bars indicate 5′ or 3′ probe regions used for Southern blot analysis: 5′ outer probe by ScaI, Neo probe and 3′ outer probe by SpeI. (b) Southern blot analysis for genomic DNAs from wild-type (WT) and chimeric mice. WT: 1, 3, 5; chimera: 2, 4, 6 (n = 3 mice in each genotype). (c) Genotypes of WT or Nna1 N KO mice were identified by PCR. (d) RT-PCR for detecting the exon 3 deletion (n = 3 mice in each genotype). (e) Western blot analysis with anti-Nna1 antibody (Frontier Institute). Lanes 1, 4, and 7 indicate WT mice, Lanes 2, 5, and 8 indicate Nna1 N KO mice, and Lanes 3, 6, and 9 indicate Nna1 C KO mice. In the Nna1 N KO brain, bands lower than the full-length Nna1 protein were detected (asterisks), together with faint bands in ladder-like patterns (n = 3 mice in each genotype).
Nna1 N KO Mice Are Impaired in Emotional and Memory Learning
In the open-field test, Nna1 N KO mice were more hyperactive and spent much more time in the center of the open-field area compared with WT mice (Figure 3a-c), while there was no significant difference in their movement speed (Figure 3d). In the light-dark transition test, Nna1 N KO mice made a significantly larger number of transitions between the light and dark areas than WT mice (Figure 4a-c). However, there was no significant difference in the total distance traveled (Figure 4b,d). There was no significant difference in the time spent in the light area of the box between WT and Nna1 N KO mice, but Nna1 N KO mice seemed to spend more time in the dark area with more motility than the wild-type mice, indicating a trend of hyperactivity (Figure 4b,e). Furthermore, we performed the elevated plus-maze test to evaluate anxiety (Figure 5a). There was no significant difference in the total distance traveled (Figure 5b).
Figure 2 caption (partial): Immunohistochemical observation in the cerebellum and hippocampus. (a) Macro images of the brains from Nna1 N KO and WT adult mice; no significant atrophy on the parasagittal image of the cerebellum between Nna1 N KO and WT mice. (b,c) Calbindin-D28K IHC on parasagittal sections; no significant expression difference in the cerebellum between KO and WT mice (n = 3 mice in each genotype). (d,d′,d″) and (e,e′,e″) Car8 and VGluT1 staining also showed no significant difference between KO and WT mice. (f,f′,f″) Nissl staining on parasagittal sections (n = 3 mice in each genotype); no significant morphological difference between Nna1 N KO and control mice. Scale bars: 0.5 mm in (b,c,f,g), 5 µm in (d′,d″).
Figure 4 caption (partial): (c) Nna1 N KO mice showed more transition latency than WT mice. (d) No significant difference between Nna1 N KO and WT mice in the travel distance in the boxes. (e) Nna1 N KO mice spent more time in the dark box than the control mice. WT, n = 6; Nna1 N KO, n = 10; * p < 0.05. All values presented are means ± SEM. "ns" means not significant.
Finally, we performed the contextual- and cued-fear conditioning tests to examine the effect of the N-terminal deletion in Nna1 N KO mice on learning memory. Adult mice aged 8-10 weeks were used for the conditioning test on day 1, the contextual test on day 2, and the cued test on day 3 (Figure 6a). For the conditioning test, mice were allowed to explore freely for 3 min and then presented with 55 dB white noise as a conditioned stimulus (CS) for 20 s, with a foot shock (0.2 mA, 2 s) delivered during the last 2 s as an unconditioned stimulus (US). Then a similar stimulation (60 s of CS, including the US during the last 2 s) was repeated twice (Figure 6b).
There was no noticeable difference in response to the stimuli between WT and Nna1 N KO mice (Figure 6c). However, in a spatial-dependent learning test, there were considerable differences in freezing behavior between WT and KO mice (Figure 6d,e). Furthermore, in a sound-dependent learning test conducted the next day, the freezing rate in Nna1 N KO mice was lower than in WT mice (Figure 6f,g).
Nna1 N KO Mice Have a Different Subunit Composition of Glutamate Receptors in the Hippocampus
Since hippocampal function is known to be associated with memory deficits, we examined AMPA-type glutamate receptor subunit expression, which is closely related to hippocampal-dependent behavior [16]. Immunohistochemical analysis revealed an increased tendency of GluA1 and GluA2 expression (Figure 7a). Western blots of crude hippocampal lysates indicated significantly increased GluA2, while GluA1 showed an increasing trend but no significant difference (Figure 7b). Interestingly, polyglutamylated tubulin, whose side chains are shortened by the peptidase activity of the Nna1 protein, was significantly increased (Figure 7c, p < 0.05). Furthermore, we observed a mild increase in GluA2 in the Nna1 N KO cerebellum (Supplementary Materials, Figures S3 and S4). Since a change in AMPA-type glutamate receptor content in the synapses may lead to changes in synaptic transmission in the Nna1 N KO hippocampus, we measured GluA1 and GluA2 in the synaptosome fraction. Surprisingly, there was a significantly decreased concentration of GluA2 in the synaptosome fraction of the Nna1 N KO hippocampus, whereas there was no change in GluA1 (Figure 7d). Furthermore, when kinesin-1, which is involved in the transport of GluA2-containing vesicles, was measured, its concentration was higher in the crude fraction and lower in the synaptosome fraction, like GluA2 (Figure 7d,e). These results suggest that in Nna1 N KO mice there are defects in the transport of GluA2 vesicles by kinesin-1 complexes.
Figure 7 caption (partial): Note that increased expression of GluA2 and polyglutamylated tubulin (polyG) was observed in the Nna1 N KO mice. (d,e) Kinesin-1 and GluA2 showed a similar expression pattern from the crude fraction (crude) to the synaptosome fraction (synapto). Note that significantly increased expression of GluA2 was observed in the crude fraction of Nna1 N KO mice, and a significant reduction was observed in the synaptosome fraction, as well as for kinesin-1. There was no significant modification of GluA1. All values presented are means ± SEM from 3 experiments. * p < 0.05, Student's t-test. β-Actin is the loading control.
Discussion
In this study, we generated Nna1 ∆Ex3 KO (Nna1 N KO) mice, in which the N-terminal exon 3 of Nna1 was deleted, to analyze the phenotype. We found that Nna1 N KO mice showed none of the ataxia seen in Nna1 null [14] or pcd mouse lines and no difference in body weight compared to WT mice. Nna1 mRNA is frameshifted by the exon 3 deletion and was expected to be degraded by NMD. However, Northern blot analysis using the Nna1 probe (exons 17-23) showed two prominent bands, similar to the WT (Supplementary Materials, Figure S1). Western blotting with an Nna1 antibody against the C-terminal region showed multiple bands with a maximum molecular weight of approximately 150 kDa in the WT mice. In the Nna1 N KO, there was a band with a slightly lower molecular weight than the WT band (~150 kDa) and multiple bands in a ladder-like pattern (Figure 1). In the Nna1 C KO mice lacking exons 21 and 22, which encode the carboxypeptidase domain, neither Nna1 mRNA nor protein was detected [14], indicating that mRNA degradation occurs in the Nna1 C KO mice by NMD. However, in the Nna1 N KO mice, the amount of Nna1 mRNA was increased in some brain regions and a truncated Nna1 protein was observed. These findings raise several possibilities: (1) the Nna1 gene has multiple transcription start sites, (2) multiple isoforms are generated by exon skipping, or (3) Nna1 mRNA has multiple translation start sites. Furthermore, the Nna1 N KO mice lack the N-terminal side of the Nna1 protein but retain the carboxypeptidase domain of the C-terminal region, enabling us to evaluate the function of the N-terminal side of the Nna1 protein. Notably, we cannot exclude the possibility that the low expression of the truncated Nna1 proteins, in addition to the loss of the N-terminus of the Nna1 protein, contributes to the phenotype of the Nna1 N KO mice. Although the Nna1 N KO mice showed no severe cerebellar phenotypes, such as Purkinje cell death, they exhibited a variety of phenotypes in systematic behavioral analyses, such as hyperactivity, reduced anxiety-like behavior, and impaired memory learning. These findings indicate that Nna1 is involved not only in neuronal survival but also in higher brain functions related to emotion and memory. Furthermore, both contextual and cued fear conditioning tests showed impaired learning ability, suggesting that Nna1 is involved in altered synaptic transmission in the hippocampus (Figure 7d). Thus, we investigated the expression level of AMPA-type glutamate receptors in the hippocampus.
Immunohistochemistry showed that GluRA1 and GluRA2 were elevated, while GluRA3 and GluRA4 were unchanged ( Figure 7a); detailed Western blotting analysis revealed that only GluA2 was significantly increased in the hippocampal crude fraction. In the synaptosomal fraction, GluA2 was significantly decreased in concentration, but GluA1 was unchanged. We next measured the amount of kinesin-1 that is involved in AMPA receptor transport [17] and found that, as with GluA2, its concentration was high in the crude fraction and low in the synaptosome fraction (Figure 7b-e), suggesting that there is some impairment occurring in kinesin-1 and GluA2 complex transport. Therefore, we examined the amount of polyglutamylated tubulin, which is associated with Nna1 [18], and found that polyglutamylated tubulin was significantly increased (Figure 7b,c). These findings suggest that Nna1 is involved in GluA2 transport to the synapse by regulating the polyglutamine content of tubulins. Notably, a recent study has reported patients with global developmental delay and hypotonia with a novel homozygous c.3293G>A mutant of the NNA1 gene in a consanguineous family [19]. The post-translational modifications (PTMs) of tubulin occur in the microtubules (MTs) of neurons and play essential roles in the dynamics and organization of MTs, thus exerting a direct effect on cell functioning, which is critical in human health and disease [20]. Nna1 is a cytoplasmic carboxypeptidase that modifies the C-terminal tail of tubulins, such as polyglutamylation, detyrosination, and generation of ∆2-tubulin [21,22]. The carboxyterminus of tubulin is polyglutamylated and is exposed on the outer surface of tubulin during microtubule assembly. The length of the polyglutamate side chain on tubulin is vital for neuronal stability and survival [23]. In this study, we showed that the hippocampus of Nna1 N KO mice expresses higher levels of polyglutamylated tubulin than WT mice (Figure 7b). This observation indicates that the N-terminal loss may result in reduced polyglutamate pruning of Nna1. In addition, the "modified" tubulin rail, where motor protein kinesin-1 binds, makes it difficult for kinesin-1 to move, presumably related to the reduced transport rate of GluA2. There are two possible reasons for the increased expression of GluA2 in the hippocampal homogenates in the Nna1 N KO mice. One is that the transport of GluA2 from the soma to the synaptic terminals is reduced due to a dysfunction in the axon transport and accumulates in the soma; N-terminal loss could ultimately reduce GluA2 transport to the synapse and induce qualitative changes in synaptic transmission. Another is that the decrease in GluA2 in synaptosomes may provide feedback to the soma to produce more GluA2 and compensate for the decrease in the synapse terminals. The present study demonstrates that Nna1 functions not only in neuronal survival but also in emotion and memory, possibly via synaptic transmission by polyglutamate modification of tubulins. Generation of Nna1 N Knockout (KO) Mice Male chimera mice were generated by injection of recombinant ES cells into eight-cell stage embryos from ICR mice (MGI:5462094, SLC Japan), and then heterozygous mice (Nna1 flox(neo+) ) were obtained by natural mating with C57BL/6N female mice (MGI:5657107, Charles River Japan). To generate a Nna1 knockout allele lacking exon 3 (Nna1 ∆Ex3 ), heterozygous F1 mice were crossed with TLCN-Cre mice (MGI:3042494) [26,27], which ubiquitously express Cre recombinase. 
Double heterozygous mice (TLCN-Cre; Nna1 flox(neo+)) were crossed with C57BL/6N mice to generate the Nna1 N KO allele (Nna1 ∆Ex3). Finally, Nna1 N KO mice (Nna1 ∆Ex3/∆Ex3 mice) were generated by crossing heterozygous pairs (Nna1 ∆Ex3/wt mice). All mice used in this study were maintained in the C57BL/6N background. Animal care and experimental protocols were approved by the Animal Experiment Committee of Niigata University and were carried out under the Guidelines for the Care and Use of Laboratory Animals of Niigata University (approval numbers: SA00733, SA01091). Animals were handled under the guidelines established by the Institutional Animal Care and Use Committee of Niigata University. To minimize animal suffering during experiments, the following signs were monitored: restlessness, vocalizing, loss of mobility, failure to groom, open sores/necrotic skin lesions, guarding (including licking and biting) of a painful area, and a change in body color. If these signs were observed, the animals were excluded from further participation and treated appropriately according to the approved protocol. The mice used were 3-50 weeks old and weighed 8-29 g. The time point of each experiment is described with the corresponding result. No randomization was performed in this study.
Genotyping PCR
Genotyping by PCR was performed as follows. Genomic DNA was extracted from the tips of the tails of wild-type and Nna1 mutant mice: the tail tissues were incubated with 0.025 N NaOH and 2 mM EDTA for 30 min at 100 °C and then mixed with an equal volume of 40 mM Tris-HCl (pH 8.0) at around 20 °C. The extracted DNA was used as a template for PCR using the Nna1-lox forward and Nna1-lox reverse primers (Table 1). PCR was performed using Quick Taq HS Dye Mix (Toyobo, Osaka, Japan) under the following conditions: 95 °C for 30 s; 30 cycles of 95 °C for 10 s, 60 °C for 30 s, and 72 °C for 1 min; followed by 72 °C for 5 min. The PCR products were separated by agarose gel electrophoresis to identify the DNA bands; bands of 1050 bp and 560 bp were amplified from the WT and Nna1 KO alleles, respectively.
Behavior Tests
Open field test: Locomotor activity was measured using an open field test, as described previously [30]. The chamber consisted of a square platform measuring 50 cm × 50 cm × 40 cm (O'Hara and Co., Tokyo, Japan) and was illuminated with a light intensity of 100 lux. Mice were placed in the corner of the field and left for 10 min to allow free exploration. During the test, the total distance traveled and the time spent in the central region were recorded and automatically calculated using Image OFCR software (O'Hara and Co., LTD., Tokyo, Japan; see 'Image analysis for behavioral tests').
Light/dark transition test: A light/dark transition test was conducted as previously described [31]. The apparatus comprised a cage (21 cm × 42 cm × 25 cm) divided into two sections of equal size by a partition with a door (O'Hara Co.). One chamber was brightly illuminated (light chamber), while the other was not (dark chamber). Mice were placed into the dark chamber and allowed to move freely between the two chambers with the door open for 10 min. The total number of transitions, time spent in each compartment, first latency of movement to the light chamber, and distance traveled were recorded automatically using Image LD software (O'Hara and Co., LTD., Tokyo, Japan; see 'Image analysis for behavioral tests').
Elevated plus maze test: The elevated plus maze test was performed as described previously [32].
The elevated plus maze consisted of two open arms (25 cm × 5 cm) and two enclosed arms of the same size, with 15-cm-high transparent walls. The arms and central square were made of white plastic sheets, elevated to 60 cm above the floor. To avoid animals falling from the apparatus, 3-mm-high Plexiglas sides were used for the open arms. Arms of the same type were arranged opposite each other. This device was set up under low illumination (100 lux at the center square). Each mouse was placed in the central square of the maze (5 cm × 5 cm), facing one of the closed arms, and behavior was then recorded during a 10-min test period. The number of entries and the time spent on open and enclosed arms were recorded. For data analysis, we recorded the percentage of entries onto open arms, the time spent on open arms (seconds), the total number of entries, and the total distance traveled (centimeters). Data acquisition and analysis were performed automatically using Image EP software (O'Hara and Co., LTD., Tokyo, Japan; see 'Image analysis for behavioral tests').
Contextual and cued fear conditioning test: Fear conditioning was performed as described previously [33]. Each mouse was placed in a test chamber (33 cm × 25 cm × 28 cm) and allowed to explore freely for 3 min. Then, a 55-dB white noise, which served as the conditioned stimulus (CS), was presented for 20 s, and during the last 2 s of CS presentation, a foot shock (0.2 mA, 2 s), which served as the unconditioned stimulus (US), was given. Two more CS-US pairings were presented with an inter-stimulus interval of 40 s. Twenty-four hours after the conditioning, a contextual test was performed in the same chamber without CS or US stimuli. Forty-eight hours after the conditioning, cued fear memory was tested: each mouse was placed in a triangular chamber (33 cm × 33 cm × 32 cm) made of opaque white plastic and allowed to explore freely for 1 min. Subsequently, each mouse was given the CS for 3 min. In each session, data acquisition and control of stimuli (i.e., shocks) were performed automatically, and the percentage of time spent freezing was calculated using Image FZ software (O'Hara and Co., LTD., Tokyo, Japan; see 'Image analysis for behavioral tests').
Subcellular Fractionation and Western-Blot Analysis
Subcellular fractions were prepared following Carlin's method [34] with minor modifications. All procedures were performed at 4 °C. Briefly, WT and Nna1 N KO mice were decapitated after cervical dislocation, and the cerebellum and hippocampus were immediately dissected, removed, and immersed in homogenization buffer (320 mM sucrose and 5 mM EDTA, containing a complete protease inhibitor cocktail tablet (Complete Mini; Roche, Mannheim, Germany)), and centrifuged at 1000× g for 10 min. The supernatant was centrifuged at 12,000× g for 10 min, and the resultant pellet was re-suspended in homogenization buffer as the P2 fraction. The P2 fraction was layered over a 1.2 M/0.8 M sucrose gradient and centrifuged at 90,000× g for 2 h. The synaptosome fraction was collected from the interface. The protein concentration was determined using BCA Protein Assay Reagent (Thermo Fisher Scientific Inc., Waltham, MA, USA). An equal volume of SDS sample buffer [125 mM Tris-Cl (pH 6.8), 4% SDS, 20% glycerol, 0.002% BPB, 2% 2-mercaptoethanol] was added to the sample fractions, and the mixture was boiled for 5 min at 95-100 °C. Dissected brains were post-fixed for 2 h in the same fixative at 4 °C.
After dehydration, the brain was embedded with paraffin, and sagittal sections 5 µm thick were stained with Cresyl Violet (Nissl) (Invitrogen). Image Analysis for Behavior Tests The application software used for the behavioral studies (Image OFCR, LD, EP, and FZ) was based on the public domain NIH Image program (developed by the U.S. National Institutes of Health and available at http://rsb.info.nih.gov/nih-image, accessed on 1 june 2013) and ImageJ program (http://rsb.ingo.nih.gov/ij/, accessed on 1 june 2013), with some modification for each test (available through O'Hara and Co., Japan) Statistical Analysis Results were presented as means ± SEM (standard error of the mean). For Western blotting, statistical analyses were performed using Student's t-test. For mouse behavior tests, statistical analyses were performed using Student's t-test. Values in the graphs are expressed as means ± SEM (standard error of the mean). Statistical significance was set at a value of p < 0.05. Sample calculation and tests for outliers were not performed. Institutional Review Board Statement: The study was conducted in accordance with the Animal Experiment Committee of Niigata University and was carried out following the Guidelines for the Care and Use of Laboratory Animals of Niigata University (approved number: SA00733, SA01091). Data Availability Statement: All data generated or analyzed during this study are included in the manuscript and Supporting Files. Raw data have been provided for mean population data shown in figures.
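As an illustration of the group comparison described under Statistical Analysis above, a minimal sketch is shown below. The data arrays are hypothetical placeholders (not the study's data), and SciPy is assumed to be available; it is not named in the original methods.

import numpy as np
from scipy import stats

# Hypothetical measurements (e.g., normalized GluA2 band intensities) for each genotype.
wt = np.array([1.00, 0.95, 1.08])
ko = np.array([1.45, 1.30, 1.52])

def mean_sem(x):
    # Mean and standard error of the mean (SEM), as reported in the figures.
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

for label, group in [("WT", wt), ("Nna1 N KO", ko)]:
    m, sem = mean_sem(group)
    print(f"{label}: {m:.2f} ± {sem:.2f} (mean ± SEM)")

# Two-sample Student's t-test (equal variances assumed); significance threshold p < 0.05.
t_stat, p_value = stats.ttest_ind(wt, ko, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")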
6,643
2022-10-26T00:00:00.000
[ "Biology" ]
CO emission survey of asymptotic giant branch stars with ultraviolet excesses
Context. The transition from the spherically symmetric envelopes around asymptotic giant branch (AGB) stars to the asymmetric morphologies observed in planetary nebulae is still not well understood, and the shaping mechanisms are still a subject of debate, even though binarity is widely accepted as a promising option. Recently, the presence of ultraviolet excesses in AGB stars has been suggested as a potential indicator of binarity. Aims. Our main goals are to characterise the properties of the circumstellar envelopes (CSEs) around candidate AGB binary stars, specifically those selected based on their UV excess emission, and to compare these properties with those derived from previous CO-based studies of AGB stars. Methods. We observed the 12CO ($J$=1-0) and 12CO ($J$=2-1) millimetre-wavelength emission in a sample of 29 AGB binary candidates with the IRAM-30 m antenna. We explored different trends between the deduced envelope parameters and compared them with those previously derived from larger samples of AGB stars in the literature. Results. We derived the average excitation temperature and column density of the CO-emitting layers, which we used to estimate self-consistently the average mass-loss rate and the CO photodissociation radius of our targets. We find a correlation between CO intensity and IRAS 60 µm fluxes, revealing a CO-to-IRAS 60 µm ratio lower than for AGB stars and closer to that found for pre-planetary nebulae (pPNe). Conclusions. For the first time we have studied the mass-loss properties of UV-excess AGB binary candidates and estimated their main CSE parameters. The different relationships of 12CO and IRAS 60 µm with the NUV and FUV emission are consistent with an intrinsic origin of the NUV emission, but a potential dominance of an extrinsic process (e.g. the presence of a binary companion) in the FUV emission.
In the past two decades, binarity has emerged as a widely accepted and decisive factor in shaping pPNe and planetary nebulae (PNe) (e.g. De Marco 2009). The prevalence of binarity in AGB stars can be attributed to its common occurrence among main-sequence stars, as established by Duquennoy & Mayor (1991). In addition, recent observations have provided strong evidence for the presence of a close binary companion for the central star in many PNe (Hillwig 2018), reinforcing the idea that binarity should be common in the AGB phase. Nevertheless, identifying binarity is not possible for most AGB stars by classical methods (i.e. transits or radial velocities) due to their high luminosity and the intrinsic variability produced by their pulsations. Recent observational studies have identified asymmetries in AGB CSEs, including enhanced-density equatorial structures and large-scale arcs or spiral patterns, that are consistent with theoretical predictions from binary models (see e.g. Soker 1994; Kim & Taam 2012; Decin et al. 2020). Sahai et al.
(2008) were the first to hypothesise that UV emission excesses in AGB stars (i.e. UV emission considerably higher than the expected photospheric emission of single AGB stars) could provide an indication of binarity, because these excesses would be produced by photospheric and/or coronal emission from a hotter companion (Teff ≳ 5500-6000 K) and/or by accretion onto the companion. They identified UV emission as a common feature in a small sample of AGB stars in the Hipparcos catalogue with indications of binarity, using the Galaxy Evolution Explorer (GALEX, Martin et al. 2005). In their sample of 21 sources, 19 were detected with near-ultraviolet (NUV) excesses (hereafter nuvAGB stars); 9 of these 19 were detected with far-ultraviolet (FUV) excesses (hereafter fuvAGB stars) as well. In addition, Ortiz et al. (2019) showed that these excesses correspond to continuum flux instead of emission lines. Hereafter we refer to AGB stars detected in either the NUV or FUV (or both) as uvAGBs. Sahai et al. (2011, 2018) studied in greater detail the case of Y Gem, an fuvAGB whose extreme UV flux, significant time variability, and infall-outflow signatures seen in UV line spectra confirmed the presence of an accretion disk around a companion star. An increasing number of fuvAGBs have also been observed to exhibit X-ray emission, which could in principle be attributed to coronal activity or accretion processes involving a companion (Ramstedt et al. 2012). The idea of accretion as a plausible mechanism for generating X-rays has gained support from recent investigations (e.g. Sahai et al. 2015; Ortiz & Guerrero 2021). Previous studies have provided valuable insights into the nature of UV emission in AGB stars. Sahai et al. (2016) suggested a possible connection between FUV and X-ray variability and accretion processes. Additionally, Ortiz & Guerrero (2016) supported the hypothesis of binarity as a primary source of UV emission, while not ruling out chromospheric activity. In contrast, Montez et al. (2017) proposed that NUV emission in AGB stars could be attributed to chromospheric and/or photospheric activity rather than binarity, and that the lack of detectability might be due to absorption by the interstellar medium (ISM) or the circumstellar medium (CSM). More recently, Sahai et al. (2022), based on modelling studies, concluded that AGB stars with R FUV/NUV ≳ 0.06 are likely associated with accretion-dominated UV emission, while those with R FUV/NUV ≲ 0.06 may have UV emission mainly originating from chromospheric and/or low-level accretion processes. Therefore, uvAGBs are ideal binary candidates and excellent sources to search for emerging asymmetries or other companion-induced changes in the properties of their envelopes. Moreover, uvAGBs may allow us to identify and characterise AGB companions, and thus test the binary evolution models. In this paper we present the results from a study of the CSEs of a sample of uvAGB binary candidates based on single-dish observations of carbon monoxide (CO) at millimetre wavelengths. We compare the main properties of our sample of uvAGB stars with those derived from similar studies of larger samples of AGB stars. In Sect. 2 we introduce and describe our sample. The observations and main observational results are given in Sects.
3 and 4, respectively. The analysis of the data, which includes a population diagram analysis to derive the mass-loss rate and the mean excitation temperature of the envelopes around our sample of uvAGBs, as well as the search for correlations between different envelope properties, is presented in Sect. 5. In Sect. 6 we compare the estimated stellar and envelope parameters in order to identify correlations in this sample or deviations from well-known trends previously studied in common AGB stars. Our results are interpreted and discussed in Sect. 8 and a summary of our main conclusions is provided in Sect. 9.

The sample

The sample of 29 uvAGB stars observed in this study is listed in Table 1 together with their equatorial coordinates and Gaia DR3 (Gaia Collaboration et al. 2021) parallaxes and distances (D). In addition, information about variability and spectral classification, pulsation period (P), chemical composition, luminosity (L bol ) and ISM reddening is presented in Table 2. Notes to Table 2: (a) from the General Catalogue of Variable Stars (GCVS, Samus' et al. 2017); (b) from the SIMBAD astronomical database (Wenger et al. 2000); (c) from integration of the extinction-corrected spectral energy distributions (SEDs) (see Appendix B); (d) interstellar medium reddening obtained with GALEXtin (see Amôres et al. 2021) with the extinction map from Bayestar19 (see Green et al. 2019).

Accurate distance estimation is a complicated subject for AGBs. While Very Long Baseline Interferometry (VLBI) parallax measurements of maser emission remain the most reliable technique to date, they are limited to a small subset of AGBs (see Andriantsaralaza et al. 2022). Gaia DR3 offers a broader alternative, yet it is crucial to note that Gaia's distance uncertainties may be underestimated by up to a factor of five, as demonstrated by Andriantsaralaza et al. (2022). To enhance the statistical reliability of our analysis, we ensured that over 50% of our sample met stringent criteria for reliable parallax measurements, including limits on brightness (G < 8.0) and RUWE (< 1.4), as outlined in El-Badry et al. (2021) and Andriantsaralaza et al. (2022). Nevertheless, it is important to acknowledge that the uncertainties associated with the distances (Table 1) and derived luminosities (Table 2) are likely underestimated.

Our sample of AGB binary candidates was derived from a larger sample of AGBs with NUV detections found in the GALEX MAST archive (Conti et al. 2011), following a similar approach as in previous studies by Sahai et al. (2008) and Sahai et al. (2015). The GALEX sample of ultraviolet-excess AGB stars is substantial, comprising over 700 objects (Sahai et al. 2022), many of which have multiple observations revealing significant variability in the ultraviolet (Sahai et al. 2022; Montez et al. 2017). To identify the most promising binary AGB star candidates, we have included a large fraction of stars exhibiting excesses in the FUV band (25 sources). Additionally, six of our sources have been detected in X-rays: RR UMi (Hunsch et al. 1998), R UMa and T Dra (Ramstedt et al. 2012), EY Hya and Y Gem (Sahai et al. 2015), and BD Cam (Lima et al. 2022).

In this section, we compare the properties of our sample of 29 uvAGBs with those of the complete uvAGB sample identified to date, as well as with the initial sample of 18381 galactic AGB stars from the catalogue published by Suh (2021), which represents one of the most comprehensive collections of AGB stars published to date.
A complete sample of uvAGB stars has been derived by cross-matching the GALEX MAST catalogue with that by Suh (2021). We used the TOPCAT tool (Taylor 2005) with a search radius of 5 ′′ . We find that 53.5% of the sources in the Suh (2021) catalogue are included in the GALEX NUV sky coverage, of which 10.4% were detected in this band. On the other hand, the fraction of sources from the Suh (2021) catalogue that are in the GALEX FUV sky coverage is 8.0%, with a detection rate in this band of only 4.5%.

It is important to note that the statistical analysis of nuvAGBs and fuvAGBs is affected by the incomplete sky coverage of the GALEX observations. Specifically, there is a lack of GALEX observations near the galactic centre, where a significant portion of AGB stars are located. This incomplete coverage hinders comprehensive statistical studies on the prevalence of nuvAGBs and fuvAGBs within the entire AGB population. In fact, 46.5% of the sample was not observed in the NUV band, and this ratio increases to 92.0% in the FUV band.

To assess the representativeness of our sample within the uvAGB class, we conducted a comparison of their properties with the complete uvAGB sample obtained by cross-matching the GALEX catalogue with the catalogue by Suh (2021). This analysis enables us to identify similarities and differences between AGBs, nuvAGBs, and fuvAGBs, providing insights into the characteristics of these subgroups.

The distribution of IRAS 60µm flux for nuvAGB and fuvAGB sources closely resembles that of AGB stars in the Suh catalogue (Fig. 1). The main distinction is that the uvAGB subclass has fewer stars, and there is a relative deficiency of objects with extremely high IRAS 60µm fluxes within this subclass. Our sample of 29 uvAGBs, which has a higher proportion of fuvAGBs compared to nuvAGBs, exhibits a distribution similar to that of fuvAGBs. Thus, it appears that our sample can be regarded as representative of fuvAGBs and covers low and intermediate IRAS 60µm fluxes (<40 Jy). However, it is worth noting that our sample lacks objects with high IRAS 60µm fluxes (>40 Jy). A discussion of the IRAS 60µm luminosity (i.e. taking into account the distances to our targets) is presented in Sect. 7.

The majority of nuvAGBs and fuvAGBs are found in regions I, II, IIIa, or VII. Specifically, fuvAGBs tend to be concentrated in areas characterised by low values of the [25]-[60] and [12]-[25] IRAS colours, primarily in regions I and II. This indicates that UV emission is more prevalent in AGB stars with thin envelopes (see e.g. van der Veen & Habing 1988). Given that our uvAGB sample primarily consists of fuvAGBs, their distribution also aligns with the concentration in regions I and II observed in the Suh (2021) catalogue of fuvAGBs.

The sample used in this work includes objects with different types of variability according to the General Catalogue of Variable Stars (GCVS, Samus' et al. 2017): Mira-type (M), semi-regular (SR), and irregular (LB) variables. We compared the variability properties of our FUV+NUV sample with the FUV+NUV-emitting AGB stars of the Suh (2021) catalogue, and find that the proportions of M, SRs and LBs are similar. However, for AGB stars with only NUV emission in the Suh (2021) catalogue, we find that the proportions of SRs and LBs are much lower (23% and 13%, respectively), and that Miras are the most abundant (62%). FUV+NUV emission is more likely associated with binarity (and related accretion activity) than NUV-only emission, as shown in Sahai et al.
(2022). Thus, our finding above that there are larger fractions of SRs and LBs in a sample of AGB stars with FUV+NUV emission compared to one with only NUV emission suggests that SR and LB variability in AGB stars is an indicator of binarity. However, further investigations using larger samples are needed to explore this tentative result more thoroughly, taking into account the observational bias that large samples likely cover a larger distance range. This makes it more difficult to detect and classify variability as well as to detect UV emission.

There are some targets in our sample that show low luminosities, ≲ 3000 L ⊙ (see Table 2). Such low values are not expected for stars on the tip of the AGB. Therefore, it is possible that some of these low-luminosity sources are still on the Red Giant Branch (RGB) or in the early-AGB phase, and therefore have relatively low mass-loss rates, with the result that they have not developed a detectable CSE (in millimetre-wave CO lines) as yet. Some of these low luminosities could also potentially be attributed to distance inaccuracies.

Observations

The observations presented in this paper were carried out with the IRAM-30 m radio telescope (Pico Veleta, Granada, Spain) using the Eight MIxer Receiver (EMIR, Carter et al. 2012) spectral line receiver. Observations were taken in two observational campaigns in 2009 and 2010. We simultaneously observed the 12CO (J=2-1) (230.538 GHz) and 12CO (J=1-0) (115.271 GHz) rotational transitions with the E230 and E090 receivers operated in dual sideband (2SB) mode and dual (H+V) polarisation. Observations were performed with different backends simultaneously. In this work, we present only the spectra taken with the VErsatile SPectrometer Array (VESPA), which provides the best spectral resolution of ∼0.3 MHz (∼0.4 and 0.8 km s −1 at 1 and 3 mm, respectively) over a spectral bandwidth of ∼216 MHz.

Observations were performed in wobbler-switching mode with a wobbler throw of 120 ′′ and a frequency of 0.5 Hz. Calibration scans on the standard two-load system were taken every ∼10-15 min. Pointing and focus were checked regularly on nearby continuum sources. After pointing corrections, the typical pointing accuracy was ∼2 ′′ -5 ′′ . On-source integration times are in the range 1-7.5 h (with a mean value of 1.7 h) per target.

The observations were reduced using CLASS following standard procedures, which include flagging bad channels (if any), baseline subtraction, and averaging of individual good-quality scans to produce the final spectra. For the sources observed in both the 2009 and 2010 observing campaigns (Y Crb, RW Boo, RU Her, DF Leo, OME Vir), the two-epoch spectra were averaged after weighting by the inverse square of the rms noise of each individual spectrum. We present the spectra in the corrected antenna-temperature (T * A ) scale, which can be converted to main-beam temperature (T MB ) by applying the well-known relation T MB =T * A /η eff , where η eff is the main-beam efficiency. For the IRAM-30 m antenna, η eff ∼0.83 at 115 GHz and η eff ∼0.65 at 230 GHz. The point-source sensitivity (S/T * A , i.e. the K-to-Jy conversion factor) is ∼6.01 and ∼7.89 Jy K −1 at 115 and 230 GHz, respectively. The beam size (HPBW) of the IRAM-30 m antenna is 21. ′′ 4 at 3 mm and 11. ′′ 7 at 1 mm.
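The temperature-scale and flux conversions quoted above can be written compactly in code. The following is a minimal illustrative sketch (not part of the actual reduction pipeline); the function names are ours, and only the η eff and S/T * A values given in the text are used.

```python
# Minimal sketch of the IRAM-30 m temperature/flux conversions quoted above.
ETA_EFF = {"3mm": 0.83, "1mm": 0.65}   # main-beam efficiencies at 115 / 230 GHz
K_TO_JY = {"3mm": 6.01, "1mm": 7.89}   # point-source sensitivities S/T_A* [Jy/K]

def t_mb(t_a_star, band):
    """Corrected antenna temperature T_A* [K] -> main-beam temperature T_MB [K]."""
    return t_a_star / ETA_EFF[band]

def flux_density(t_a_star, band):
    """T_A* [K] of an unresolved source -> flux density [Jy]."""
    return t_a_star * K_TO_JY[band]

# Example: a line peak of T_A* = 0.10 K at 1 mm
print(t_mb(0.10, "1mm"), flux_density(0.10, "1mm"))   # ~0.15 K, ~0.79 Jy
```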
The noise of the spectra for a velocity resolution of 0.8 km s −1 is in the range rms=7.7-95 mK (with a median value of 12 mK) at 1 mm and rms=5-30 mK (with a median value of 11 mK) at 3 mm. The uncertainty of the relative calibration of our observations was estimated by observing CW Leo, a well-studied AGB star with intense CO emission that is commonly used as a line calibration standard. The relative calibration uncertainty has also been checked by comparing the 12CO (J=2-1) and 12CO (J=1-0) spectra of five of our targets observed in 2009 as well as in 2010. Based on this, we estimate total line flux uncertainties of ∼20-25% at 1 mm and of ∼10-15% at 3 mm.

Observational results

We have searched for 12CO (J=1-0) and 12CO (J=2-1) emission in a sample of 29 uvAGB binary candidates. Line emission was detected in 15 targets, 11 of them in both transitions. The observed spectra are presented in Figs. 3 and 4 for 12CO (J=2-1) and 12CO (J=1-0), respectively. The main line measurements directly obtained from the observations are summarised in Table A.1.

Line profiles and expansion velocities

The line profiles observed in our targets exhibit a range of shapes, including (pseudo-)parabolic, double-horned, flat, triangular, and Gaussian-like profiles, among others. Some objects also show profile asymmetries, such as horns with different intensities or quasi-triangular profiles with one side, in some cases the redshifted one, being less intense. In the case of RW Boo, a double-component profile has been found, as previously reported by Díaz-Luis et al. (2019). The observed line profiles in our predominantly O-rich targets do not closely resemble the canonical profiles expected for the so-called standard CSE model, which assumes an isotropic flow at a constant velocity. In the standard model, the line profile can vary between parabolic, flat, and double-horned shapes, depending on the line optical depth and the size of the envelope relative to the telescope beam (Olofsson et al. 1993). However, this deviation from the idealised CSE model is a known property of O-rich CSEs, as discussed in previous studies by Margulis et al. (1990) and Knapp et al. (1998). These deviations can be attributed to geometric or velocity effects in the CSEs or to multiple mass-loss components.

The 12CO (J=2-1) and 12CO (J=1-0) line profiles have enabled us to estimate two key line parameters: the integrated flux and the average expansion velocity of the envelopes. These parameters were determined by fitting a shell-type profile using the software CLASS, where the line shape is parametrised as

f(ν) = A / [∆ν (1 + H/3)] × [1 + 4H ((ν − ν 0 )/∆ν) 2 ] for |ν − ν 0 | < ∆ν/2,

where A is the integrated area, ν 0 is the central frequency, ∆ν is the full width at zero intensity, and H is the horn-to-centre ratio of the line. The equivalent expansion velocity is

V exp = c ∆ν / (2 ν 0 ),

where c is the speed of light. Additionally, we determined the full width at half maximum (FWHM) of the lines by fitting a Gaussian profile to each detected transition. For RW Boo, the 12CO (J=2-1) and 12CO (J=1-0) lines exhibited a complex structure. In this case, the expansion velocities were estimated by combining two shell profiles: the primary component represented a two-horn profile, while the secondary component corresponded to a parabolic profile.
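As an illustration of the profile fitting just described, the sketch below implements the shell-type parametrisation written above and the corresponding expansion velocity. It is a simplified stand-in for the CLASS SHELL fit (the function names and the example numbers are ours), intended only to make the meaning of A, ν 0 , ∆ν and H explicit.

```python
import numpy as np

def shell_profile(nu, A, nu0, dnu, H):
    """Shell-type line shape: area A, centre nu0, full width at zero intensity dnu,
    horn-to-centre ratio H; zero outside |nu - nu0| < dnu/2."""
    x = (nu - nu0) / dnu
    f = A / (dnu * (1.0 + H / 3.0)) * (1.0 + 4.0 * H * x**2)
    return np.where(np.abs(x) < 0.5, f, 0.0)

def v_exp_kms(nu0_ghz, dnu_mhz):
    """Equivalent expansion velocity V_exp = c * dnu / (2 * nu0), in km/s."""
    c_kms = 2.99792458e5
    return c_kms * (dnu_mhz * 1.0e-3) / (2.0 * nu0_ghz)

# Example: a 12CO (J=2-1) line (230.538 GHz) with a fitted full width of ~15.4 MHz
print(v_exp_kms(230.538, 15.4))   # ~10 km/s
```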
The line parameters derived as just described are listed in Table A.1. The distribution of expansion velocities for our targets is depicted in Figure 6. The majority of the sources exhibit moderate expansion velocities (V exp ∼ 10 km s −1 ), which aligns with typical values observed in AGB stars. A few sources display narrower line profiles, indicating lower expansion velocities (V exp ≲ 5 km s −1 ). These sources are R UMa, RZ UMa, Z Cnc (identified only in the 12CO (J=2-1) transition), and Y Crb (identified in both lines).

CO detection statistics

Regarding the variability type, we observed the following CO detection fractions: 83% for Miras, 56% for SRs, and 14% for LBs. These ratios are slightly lower than in previous CO surveys of AGBs, where detection rates were around 90% for Miras, 70% for SRs, and 50% for LBs (see e.g. Margulis et al. 1990; Kerschbaum & Olofsson 1999). This effect, which is particularly notable for LB stars, could suggest that stars with UV excess have a lower likelihood of having CO-rich circumstellar envelopes compared to normal AGB stars. However, the relative order of detection ratios among the variability classes remains consistent.

The CO detection rate is 50% (13/26) among O-rich stars and 100% (2/2) among C-rich stars. Since our sample is strongly biased towards O-rich stars, which represent 26 (90%) of the targets, the CO detection statistics as a function of chemistry cannot be considered meaningful, although they do go in the same direction as many previous works, which find a larger CO detection fraction among C-rich AGB stars than in other chemistry types (e.g. Knapp & Morris 1985).

Figure 5 illustrates the detectability of 12CO in our sample, comparing different parameters to identify any discernible trends. Figure 5 (a) shows that CO detections are not strongly biased towards the nearest targets, but spread over a wide range of distances between 300 and 1000 pc. In fact, there is a significant number of CO non-detections at distances lower than 300 pc.

Figure 5 (b) shows that the O-rich AGB stars of our sample located in regions I and VII were not detected; this is in good agreement with the location of the AGBs with the thinnest CSEs in region I or close to it (see van der Veen & Habing 1988), whereas most of the targets located in region II and all the targets located in region IIIa were detected. On the other hand, both C-rich AGB stars of our sample were detected (VY UMa and T Dra, which are located in regions VIa and VII, respectively).

In Sect. 7.1, we specifically examine the correlation between the CO and IRAS fluxes. As anticipated, we observe a proportional relationship between these fluxes, consistent with most of the sources with CO detections being characterised by high IRAS brightness (F 60 > 5 Jy), with the exception of Y Crb, which has F 60 ∼ 1.8 Jy.

Since the main difference between our sources and most AGB stars is their UV excesses, we have analysed the distributions of NUV and FUV magnitudes in relation to the CO detections. Figs.
5 (c) and (d) show the distributions of the NUV and FUV time-averaged magnitudes (M NUV and M FUV , respectively) without reddening correction for the CO detected and non-detected sources. It is apparent that sources with CO detections have statistically higher UV magnitudes (lower UV fluxes) than those without CO detections. This difference is expected because UV extinction is higher in envelopes with higher densities; we study this anti-correlation in more detail in Sect. 6.

Figure 5 (e) illustrates the comparison between the FUV and NUV luminosities (L FUV and L NUV , respectively) without reddening correction for CO detections and non-detections. The relationship between the NUV and FUV luminosities appears to be proportional, consistent with previous studies (see fig. 4 of Ortiz et al. 2019). However, some sources observed at different epochs exhibit variations that deviate from this correlation. These variations result in significant changes in the R FUV/NUV ratio, with some sources showing an increase in one flux and a decrease in the other. Moreover, the sources with the highest GALEX luminosities (i.e. BD Cam and Y Gem) also show a high R FUV/NUV ratio (R FUV/NUV > 0.2), whereas our entire sample covers a range R FUV/NUV ≃ 0.06-1.0. Without ignoring these considerations, the distribution does not reveal a clear trend between CO detections and the FUV/NUV ratio. However, it is evident that CO detections are predominantly found among sources with lower UV excess, as already deduced from Figs. 5 (c) and (d).

Fig. 3: IRAM-30 m spectra of the 15 sources detected in the 12CO (J=2-1) transition (velocity resolution is δv=1.6 km s −1 ). Line profile fits are shown in red (see Sect. 4).

Analysis: CO population diagrams

The main goal of this work is to characterise the molecular envelopes of uvAGB stars (i.e. AGB binary candidates) as a class. In this section we present our data analysis methodology to constrain the most relevant envelope parameters, including the mass-loss rate of the AGB wind, and the results obtained.

The CO rotational lines provide a fundamental diagnostic of the physical conditions of the molecular gas. As a first approximation, we have applied the classical population (or rotational) diagram method, explained in Goldsmith & Langer (1999), to estimate the beam-averaged excitation temperature (T ex ) and the 12CO column density (N CO ) of the CSE layers where the observed 12CO (J=1-0) and 12CO (J=2-1) transitions are predominantly produced. This method has been successfully used in the analysis of the molecular emission from the envelopes of many evolved stars (e.g. Ramos-Medina et al. 2018; Justtanont et al. 2000).
The population diagram method relies on two main hypotheses: (i) optically thin emission and (ii) local thermodynamic equilibrium (LTE) conditions. As explained in Goldsmith & Langer (1999), under these two conditions, the column density of CO molecules in the upper state (N u ) and its energy above the ground state (E u ) are related by the following equation:

N u /g u = [3 k B W / (8 π 3 ν S ul µ 2 )] (∆Ω a /∆Ω s ) C τ = (N/Z) exp(−E u /k B T ex ).   (3)

Here g u is the degeneracy of the upper level, Z is the partition function of the molecule, k B the Boltzmann constant, ν is the rest frequency of the transition, S ul is the line strength and µ is the permanent dipole moment of the molecule, N is the column density of the molecule (in our case N CO ), W = ∫ T * A dv is the integrated area of each spectral line, and ∆Ω a and ∆Ω s are the antenna and the source solid angles, respectively. The beam filling factor, defined as the ratio (∆Ω a /∆Ω s ), has been estimated assuming a Gaussian distribution for both the antenna beam pattern and the source brightness distribution. Eq. 3 also incorporates the so-called opacity correction factor defined by Goldsmith & Langer (1999) as

C τ = τ / (1 − e −τ ),

where τ is the line peak optical depth. As discussed by these and other authors, this correction is valid as long as the lines are not very opaque (τ ≲ 1), which is the case for our sources (see below in this section).

According to Eq. 3, a straight-line fit to the points in the population diagram provides T ex from the slope of the fit and N from the y-axis intercept. The total CO column density has been converted to the total (H 2 ) mass in the CO-emitting volume for our targets in a simplified way as

M ≃ m H 2 (N CO /X CO ) π R s 2 ,

where m H 2 = 3.32 × 10 −27 kg is the mass of the H 2 molecule and X CO = N CO /N tot is the fractional abundance of 12CO relative to H 2 . Assuming a constant expansion velocity and using the values of the total emitting mass, we calculate the mass-loss rate ( Ṁ) for each source in a simplified manner by dividing the total mass by the crossing time of the CO-emitting layers, which is computed as the ratio between the characteristic radius of the CO envelope and the expansion velocity (t exp = R s /V exp ).

The extent of the 12CO (J=1-0) and 12CO (J=2-1) emitting region in our targets is unknown a priori; however, it is a critical parameter to constrain the main envelope properties (T ex and N) using the population diagram technique described above. In this work, we have obtained a self-consistent estimate of the mass-loss rate and of the characteristic radius (R s ), considering that, as empirically demonstrated by Ramstedt et al. (2020), R s is approximately 2-3 times smaller than the CO photodissociation radius (R CO ) obtained following the formulation by Mamon et al. (1988). We follow an iterative population diagram fitting process, starting by adopting a first value of R s as input to derive a first guess of Ṁ (computed as described above). This value of Ṁ is then used to estimate R CO , computed following the formulation by Mamon et al. (1988) (see also Planesas et al. 1990). This new value of R CO is used to compute a new value of R s , which is then used as a new input value to derive again T ex , N, and Ṁ. This process is repeated until the input and output R s converge to the same value.

The population diagrams of our sources, with the corresponding linear fits obtained once the iterative process just described has converged, are plotted in Appendix E.
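The iterative scheme described above can be summarised with the following sketch. It is only an outline of the procedure, not the code actually used: the population-diagram fit and the Mamon et al. (1988) photodissociation radius are passed in as callables (here replaced by toy stand-ins), the X CO abundance is an assumed illustrative value, and all names are ours.

```python
import numpy as np

M_H2 = 3.32e-27     # kg, mass of the H2 molecule (value quoted in the text)
MSUN = 1.989e30     # kg
YR_S = 3.156e7      # s per year

def envelope_mass_msun(n_co_cm2, r_s_cm, x_co=2.0e-4):
    """Simplified H2 mass in the CO-emitting volume, M ~ m_H2 (N_CO/X_CO) pi R_s^2.
    x_co (12CO/H2 abundance) is an assumed illustrative value."""
    n_h2 = (n_co_cm2 / x_co) * np.pi * r_s_cm**2   # total number of H2 molecules
    return n_h2 * M_H2 / MSUN

def mass_loss_rate(m_msun, r_s_cm, v_exp_kms):
    """Mdot = M / t_exp, with t_exp = R_s / V_exp."""
    t_exp_yr = r_s_cm / (v_exp_kms * 1.0e5) / YR_S
    return m_msun / t_exp_yr

def self_consistent_fit(pop_diagram, rco_formula, v_exp_kms,
                        r_s0_cm=1.0e16, rs_over_rco=1.0 / 2.5,
                        tol=0.01, max_iter=50):
    """Iterate R_s -> (T_ex, N_CO) -> M -> Mdot -> R_CO -> R_s until convergence.
    pop_diagram(R_s) must return (T_ex, N_CO) for the assumed source radius;
    rco_formula(Mdot, V_exp) must return the CO photodissociation radius
    (e.g. the Mamon et al. 1988 formulation, not reproduced here)."""
    r_s = r_s0_cm
    for _ in range(max_iter):
        t_ex, n_co = pop_diagram(r_s)
        m = envelope_mass_msun(n_co, r_s)
        mdot = mass_loss_rate(m, r_s, v_exp_kms)
        r_new = rs_over_rco * rco_formula(mdot, v_exp_kms)
        if abs(r_new - r_s) / r_s < tol:
            r_s = r_new
            break
        r_s = r_new
    return t_ex, n_co, m, mdot, r_s

# Toy stand-ins, for illustration only (not the real fit or the Mamon formula):
toy_pop = lambda r_s: (10.0, 1.0e17 * (1.0e16 / r_s))       # (T_ex [K], N_CO [cm^-2])
toy_rco = lambda mdot, v: 4.0e16 * (mdot / 1.0e-6) ** 0.6   # [cm]
print(self_consistent_fit(toy_pop, toy_rco, v_exp_kms=10.0))
```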
The values of the different envelope parameters derived are given in Table 4 and are presented and discussed in the next section. For sources where only the 12CO (J=2-1) transition is detected, the column density has been estimated assuming an excitation temperature of 10 K, which is similar to the mean value of T ex derived for targets with detections in both 12CO (J=2-1) and 12CO (J=1-0). However, in the case of RR Eri, the 12CO (J=2-1) / 12CO (J=1-0) line ratio was not consistent with T ex =10 K but indicated a higher value of T ex =25 K, which has then been adopted.

From our analysis, we found optical depth values τ << 1 for all sources except for T Dra, VY UMa and Y Crb, for which the CO peak-line optical depth takes values close to one for the 12CO (J=2-1) line. In these cases, the corresponding opacity-correction factor applied to the envelope mass and, thus, to the mass-loss rate is still moderate, of ∼ 1.6.

We conducted a comparison between our estimated mass-loss rates ( Ṁ) and those previously reported in the literature ( Ṁ lit ) for those sources for which this is possible (a total of 11 sources). Some of these prior estimations employed more detailed analyses, such as radiative transfer models. This comparison, documented in Appendix D, shows that the Ṁ values presented in this paper are in good agreement with those from previous studies. Therefore, despite relying on an approximate method, the population diagram yields relatively accurate results. This can be attributed to the effective thermalisation or near-thermalisation of CO molecules in the CSEs, along with the low to moderate CO line opacities in our targets (for a more extensive discussion on this matter see Ramos-Medina et al. 2018). We estimate that our Ṁ values are accurate within a factor of approximately 3-4.

Considering the typical values of the radius of the CO-emitting layers found for our targets and the corresponding distances, we estimate angular diameters ranging from approximately 1. ′′ 2 to 10 ′′ , that is, smaller than the telescope beam (∼22 ′′ and 11 ′′ at 3 and 1 mm, respectively). Only T Dra and SV Peg, which have larger sizes compared to the other sources, may be partially resolved by the telescope beam.

Properties of the circumstellar envelopes

In this section, we discuss the primary characteristics of the molecular envelopes surrounding our sample of uvAGB stars, as inferred from our detections of CO line emission (Table 4). Furthermore, we compare the obtained values with those derived from various AGB samples in the existing literature. In the case of RW Boo, we present the solution for the average V exp as well as two additional solutions corresponding to each of the two V exp components found in the spectral lines; these latter two solutions are presented for illustrative purposes, but were not taken into account in the subsequent analysis.

Figure 6 presents the distributions of the key envelope parameters obtained in Sect. 5, specifically the expansion velocity (V exp ), mass (M), characteristic radius of the CO-emitting volume (R s ), and mass-loss rate ( Ṁ).

Expansion velocity

The distribution of expansion velocities in our sample exhibits a relatively even spread, encompassing values ranging from approximately 3 to 13 km s −1 (Fig.
6, left).These values fall within the range of expansion velocities observed in AGB envelopes, although they tend to occupy the lower part of the population, which typically reaches up to ∼30 km/s according to previous studies (Höfner & Olofsson 2018).This outcome can be attributed to the composition of our sample, which primarily comprises O-rich AGB stars (90%) alongside a substantial proportion of irregular and semi-regular AGB stars (79%).It has been shown that O-rich have lower expansion velocities than C-rich AGBs (e.g.Margulis et al. 1990;Höfner & Olofsson 2018).In addition, it is widely recognised that irregulars and semiregulars exhibit systematically lower terminal velocities compared to Mira-type variables (see Olofsson et al. 2002). The lowest expansion velocities observed in this study are those of the O-rich semi-regulars Y Crb and Z Cnc (V exp =3.5 and 3.9 km s −1 respectively).According to Winters et al. (2003) the mass-loss mechanism in low outflow-velocity (V exp <5 km s −1 ) AGB stars is fundamentally dominated by pulsations without the dust radiation pressure playing a dominant role, as opposed to the majority of AGBs.Another possibility is that the narrow profiles observed in some of our sources may be associated with stable, rotating structures in a circumbinary configuration, similar to what has been observed in L 2 Pup (see Kervella et al. 2019). At the high end of our velocity distribution, with V exp ∼12 km s −1 , we find the C-rich Mira T Dra and the O-rich semi-regulars RR Eri and RW Boo.For RW Boo a double-shell profile was found (see Sect. 4.1) indicating the presence of a slow (V exp ∼8.0 km s −1 ) and a fast (V exp ∼16.6 km s −1 ) wind components, in this analysis we have considered the parameters estimated with an average expansion velocity (V exp =12.3 km s −1 ).Finally, within the sensitivity limit of our observations, we do not identify in any of the uvAGB stars in our sample the presence of massive, fast (up to a few hundred km s −1 ) molecular outflows like those commonly present in the subsequent evolutionary stages of pre-PN and PN (e.g.Bujarrabal et al. 2001;Sánchez Contreras & Sahai 2012). Size and mass The characteristic radius of the envelope layers where the 12 CO (J=1-0) and 12 CO (J=2-1) emission observed is predominantly produced, which has been self-consistently estimated together with the mass-loss rate from our population diagram analysis (Sect.5), takes values ranging from 6×10 15 cm to 2×10 17 cm with a mean around 2×10 16 cm.This is in good agreement with values obtained from previous works using similar empirical relationships or model-based indirect estimations (e.g.Groenewegen 2017) but also from direct measurements from interferometric CO mapping (e.g.Ramstedt et al. 2020) for other samples of AGB CSEs with similar mass-loss rates.The derived sizes of the envelopes and their corresponding expansion velocities indicate relatively short crossing or kinematical times, ranging from ∼200 to ∼2000 years (Table 4).Consequently, CO millimetre-wavelength observations are not sensitive to the mass-loss process that occurred more than a few thousand years ago.This finding aligns with the relatively low values of the envelope mass discovered in our study, ranging from 10 −5 to 5 × 10 −3 M ⊙ , with a peak around 3 × 10 −4 M ⊙ .Our analysis reveals a slightly higher mass distribution compared to the results reported by Ramos-Medina et al. (2018) and da Silva Santos et al. 
( 2019) based on high-J CO emission observations with the Herschel telescope. This discrepancy is consistent with the fact that far-infrared observations probe a smaller volume, closer to the central regions, than the millimetre-wavelength observations.

Mass-loss rate

The distribution of Ṁ of our sample covers a range from 6×10 −8 to 3×10 −6 M ⊙ yr −1 with a mean value of 6×10 −7 M ⊙ yr −1 . The mass-loss rates obtained in our study fall within the range of values reported in previous investigations of large samples of AGB stars (see Höfner & Olofsson 2018). However, it is worth noting that our sample lacks objects exhibiting extremely high or extremely low mass-loss rates, specifically those reaching up to a few ×10 −4 M ⊙ yr −1 or falling below a few ×10 −8 M ⊙ yr −1 . The uvAGB stars in our sample with the lowest and highest mass-loss rates are Z Cnc and T Dra, respectively.

For AGB stars there are well-known correlations between some of the fundamental parameters of their envelopes, in particular between Ṁ and V exp , and between these and the stellar pulsation period (P) for regular and semi-regular variables. In Fig. 7 we explore those same relationships to see if they are followed in the same way by our sample of uvAGBs. In this comparison, we use the sample of AGB stars compiled by Van de Sande et al. (2021). Pulsation periods for all targets (AGBs and uvAGBs) are obtained from the General Catalogue of Variable Stars (GCVS).

Fig. 7: Comparison between some fundamental parameters of the AGB envelopes ( Ṁ, V exp and P) and with well-known correlations found in the literature. Upper: relationship between Ṁ and V exp . Centre: relationship between Ṁ and P. Lower: relationship between V exp and P. The colours of the markers represent the variability type: red, semi-regulars (SRs); green, irregulars (LB); blue, Miras (M). The dashed line represents the linear fit performed over our sample and the solid lines represent the linear relationships of Ṁ and V exp with P found for Mira AGBs by Vassiliadis & Wood (1993) (e.g. V exp = −13.5 + 0.056 P). Grey markers represent AGB stars with Ṁ and V exp values compiled by Van de Sande et al. (2021). The shape of the markers represents the chemistry type in the two samples: circles for O-rich AGB stars and crosses for C-rich AGB stars.

The mass-loss mechanism primarily manifests itself through two key parameters: the mass-loss rate ( Ṁ) and the expansion velocity (V exp ). Extensive studies of large samples of AGB stars have consistently demonstrated a correlation between these parameters of the type Ṁ ∝ V exp α , with values of α=1.4-3.3 depending on the chemistry and variability type of the stars (see e.g. Young 1995; Knapp et al. 1998; Olofsson et al. 2002). In our specific study of uvAGBs, we obtained a very similar result (Fig. 7, top), showing a weak relationship Ṁ ≃ 10 −8 V exp 1.8 . However, it is important to note that our sample size is relatively small compared to previous studies, which may partially explain the relatively low Pearson's correlation coefficient found, r=0.65. In line with previous findings by Olofsson et al. (2002), a weaker correlation between Ṁ and V exp is observed for O-rich AGB stars when compared to their C-rich counterparts. The spread in the Ṁ versus V exp trend found in previous studies of AGB stars is in any case large, which probably reflects dust-to-gas ratio variations among other factors (Netzer & Elitzur 1993; Habing et al. 1994).
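The power-law index quoted above (α in Ṁ ∝ V exp α ) is obtained from a straight-line fit in log-log space. The snippet below is a generic illustration of that procedure with invented numbers; it does not reproduce our measurements or the exact fit shown in Fig. 7.

```python
import numpy as np

# Illustrative only: invented (V_exp, Mdot) pairs, not our sample values.
v_exp = np.array([3.5, 5.0, 8.0, 10.0, 12.5])        # km/s
mdot = np.array([8e-8, 2e-7, 5e-7, 9e-7, 2e-6])      # Msun/yr

alpha, log_a = np.polyfit(np.log10(v_exp), np.log10(mdot), 1)
r = np.corrcoef(np.log10(v_exp), np.log10(mdot))[0, 1]   # Pearson's r in log space
print(f"Mdot ~ {10**log_a:.1e} * V_exp^{alpha:.1f}  (r = {r:.2f})")
```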
In the case of regular variable stars, such as Mira-type stars, it is widely recognised that a correlation exists between both Ṁ and V exp and the stellar pulsation period, P (e.g.Vassiliadis & Wood 1993).The uvAGB stars in our sample adhere closely to this relationship, as depicted in Fig. 7 (middle and lower panels).The semi-regular uvAGB stars, however, tend to cluster in the low-period region (P<200 days), exhibiting mass-loss rates (and expansion velocities) in the range Ṁ∼10 −7 -10 −6 M ⊙ yr −1 (V exp ∼5-15 km s −1 ), which is greater than would be expected based on their relatively short periods if they were Mira variables.Notably, no discernible correlation with the period is observed for these semi-regular uvAGB stars, which aligns with findings from larger semi-regular AGB samples (Van de Sande et al. 2021).In these cases, it suggests that there is another influential factor, beyond pulsation, playing a significant role in governing the mass-loss process of these objects. Mass outflow momentum and Ṁ-L bol relationship In our study of uvAGBs, we have examined the ratio β= Ṁ V exp c/L bol , which compares the momentum rate in the mass outflow ( Ṁ V exp ) to the momentum rate in the stellar radiation (L bol /c).This parameter, also known as the overpressure or radiation pressure efficiency, is directly proportional to the dust opacity of the wind if radiation pressure on dust grains is the main wind driving force (Knapp 1986;Lefevre 1989). Analysing the distribution of beta values for our sample (Fig. 8, upper panel), we consistently find values well below 1 for all the sources, aligning well with the values observed in the majority of AGB stars and with the values expected if radiation pressure is the principal operative mechanism.As pointed out by Knapp (1986), the β values much lower than 1 (found in our sample but also in some AGB stars) indeed suggests that these objects are not losing mass as efficiently as it is possible.It is worth mentioning also that while some AGB stars may exhibit β values slightly higher than 1, which can be attributed to multiple scattering (Lefevre 1989), none of the objects in our sample demonstrate such behaviour.The sources with the largest values of β∼0.1-0.2 are T Dra and RW Boo.In the case of the progeny of AGB stars (i.e.post-AGBs or pre-PNe), it is frequently observed that the β values are notably high, which is considered as an indication that a distinct mass-loss mechanism, one that differs from radiation pressure acting on dust, is occurring in these stages (Bujarrabal et al. 2001). Stellar pulsation is the other mechanism determining the mass-loss rate.We find a rough proportionality between β and the pulsation period P for our sample (Fig. 8, central panel), in agreement with previous results for AGB stars (Knapp 1986). The observed trend reflects an underlying correlation between Ṁ and P: the mass-loss rate increases with longer periods and consequently, the dust opacity also increases.We find that Mira variables and semi-regulars occupy two distinct regions in the Pβ diagram.The SRs are located in the lower part, reflecting that despite their shorter periods, they exhibit similar mass-loss rates compared to Mira variables as shown in Fig. 7 (middle panel). 
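The β parameter defined above is a simple ratio of momentum rates; the sketch below shows how it is evaluated in consistent units. The input numbers are purely illustrative and do not correspond to any specific target in our sample.

```python
MSUN_KG = 1.989e30
YR_S = 3.156e7
LSUN_W = 3.828e26
C_MS = 2.998e8

def beta(mdot_msun_yr, v_exp_kms, l_bol_lsun):
    """beta = Mdot * V_exp * c / L_bol : wind momentum rate over photon momentum rate."""
    wind = (mdot_msun_yr * MSUN_KG / YR_S) * (v_exp_kms * 1.0e3)   # N
    photons = l_bol_lsun * LSUN_W / C_MS                           # N
    return wind / photons

# Illustrative values: Mdot = 6e-7 Msun/yr, V_exp = 10 km/s, L_bol = 5000 Lsun
print(beta(6e-7, 10.0, 5000.0))   # ~0.06, i.e. well below 1
```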
In the last panel of Figure 8, we compare Ṁ and L bol for our sample. All our targets fall below the single-scattering limit, represented by the solid line, as already discussed and as indicated by the low values of β observed. We observe a correlation that can be described by the power-law relation Ṁ ∼ 3 × 10 −12 L bol 1.4 , represented by the dashed line in the figure. A power-law relationship is in line with expectations and is commonly observed in AGB stars. It indicates that as the luminosity increases, either due to larger stellar masses or a more advanced stage of evolution on the AGB, the mass-loss rate tends to increase as well (Höfner & Olofsson 2018; Groenewegen & Sloan 2018). We compared our values of Ṁ and L bol with those reported by Danilovich et al. (2015), who estimated mass-loss rates from radiative transfer modelling of CO line emission in a sample of 53 AGB stars. Upon initial inspection, there is an apparent discrepancy in the exponent of the power-law relationship between the two samples. However, when considering the majority of sources in both studies, there is a general agreement and similarity in the distribution of points. The only notable exceptions are three objects (RT Cnc, RU Her and, perhaps, R LMi) located at the extreme high and low ends of the luminosity range. If we exclude these objects from the analysis, the slopes or power-law exponents of the correlations in both studies match more closely. This indicates that, overall, there is consistency between the mass-loss rate and luminosity relationships derived from our study and that of Danilovich et al. (2015), with the exception of a few outliers.

CO emission and CO-derived envelope parameters: correlations with IR/UV properties

In this section we investigate correlations between the CO emission intensity and the infrared and ultraviolet continuum emission, as well as the bolometric luminosity. We also explore trends between the primary envelope parameters derived from CO and the distinctive ultraviolet emission properties exhibited by our targets.

CO versus IRAS 60µm emission

The correlation between the CO line intensity and the IRAS 60µm emission has been well established for low-to-intermediate mass evolved stars, including AGB and post-AGB stars, as documented in numerous previous studies (Nyman et al. 1992; Bujarrabal et al. 1992; Olofsson et al. 1993, among others). This correlation is attributed to the fact that both the CO emission and the IRAS 60µm emission are indicative of the amount of material (in the form of gas and dust, respectively) present in the envelopes.

In our study of uvAGB stars, we also observe a strong correlation between the velocity-integrated luminosity of the 12CO (J=2-1) transition (L CO ) and the IRAS 60µm luminosity (L 60µm ), as shown in Fig. 9 (upper panel). This finding supports the notion that the gas and dust mass-loss rates are proportional to each other. Given that the infrared radiation is a significant or dominant component of the energy emitted by the dusty envelopes around AGB stars, it is reasonable to expect a correlation between the CO luminosity and the bolometric luminosity, which is in fact observed in our sample of uvAGBs (Fig. 9, central panel). A linear fit was performed to the L CO versus L 60µm and L bol data points (dashed lines in Fig. 9). We note that most of the low-luminosity (L bol <3000 L ⊙ ) sources are CO non-detections (triangles in Fig.
9). Some of these low-luminosity targets could be RGB or early-AGB stars, with mass-loss rates still well below the maximum values reached at the tip of the AGB phase (see e.g. Höfner & Olofsson 2018). However, the large uncertainties in their luminosities (see Sect. 2) do not allow us to robustly assert that these are not AGB stars. Nevertheless, they do not affect the subsequent analysis, as the CO non-detections were not used in the fitting; the derived upper limits on their CO fluxes are quite consistent with the various relationships obtained for the detected targets discussed in this section (see e.g. the relationships between CO and IRAS 60µm/bolometric luminosities shown in Fig. 9).

In the literature, different approaches have been used by several authors to examine the relationship between the CO intensity and the IRAS 60µm emission. In principle, we focused on using the luminosity as the reference magnitude and specifically examined the 12CO (J=2-1) transition due to its higher number of detections compared to the 12CO (J=1-0) transition. However, a common method employed in exploring the CO-F 60 emission relationship is to compare the main-beam brightness temperature (T MB in K) of the 12CO (J=1-0) transition with the IRAS 60µm flux (in Jy). In Fig. 9, lower panel, we show the distribution of the uvAGBs in this study using this same representation, together with the relationships derived for different samples of AGB stars in the past.

Specifically, we refer to the relationships presented in Nyman et al. (1992) for AGBs in different regions of the IRAS two-colour diagram (the same regions are shown in Fig. 2), including O-rich sources (located in regions II and III) and C-rich sources (located in regions VIa and VII). To account for the difference in beam size between the SEST 15 m telescope, used as a reference in the studies of Nyman et al. (1992), and the IRAM 30 m antenna, used in this work, we multiplied their relationships by a factor of 4 (approximately the ratio of the beam solid angles for an unresolved source) for a proper comparison with ours. Furthermore, we present the relationship described by Bujarrabal et al. (1992) for pPNe, which shows that pPNe are positioned noticeably below the relationships observed for AGBs. Bujarrabal et al. (1992) interpreted this as an indication that pPNe generally have a lower gas-to-dust mass ratio. This could be attributed to significant CO photodissociation in more advanced stages of evolution beyond the AGB phase, in which the envelopes become more diluted and the central stars become hotter.

Fig. 9: In the upper and middle panels L CO is compared with L 60µm and L bol . In the lower panel the 12CO (J=1-0) main-beam brightness temperature (T MB ) is plotted against the IRAS 60µm flux, after properly scaling T MB when observed with a telescope other than the IRAM-30 m antenna (see Sect. 7.1). The dashed line is the best linear fit to our data; the dotted lines are the linear correlations from Nyman et al. (1992) and Bujarrabal et al. (1992) for O-rich AGBs (orange), C-rich AGBs (purple), and pre-PNe (cyan). The colours and shapes of the markers are the same as in Fig. 7.

We found a relationship I CO ∼ 0.27 F 60 for our sample of 10 targets with 1-0 detections (2 C-rich and 8 O-rich). The only two C-rich uvAGBs in our sample lie very close to the expectations based on the CO-F 60 relationship reported by Nyman et al. (1992) for this same chemical type. The O-rich uvAGBs in our sample, however, seem to fall below the relationship for O-rich AGBs (Nyman et al. 1992) and much closer to the relationship for pPNe (Bujarrabal et al.
1992).Since our uvAGBs are characterised by an excess of emission in the UV, this result does not seem overly surprising and could indicate, as in the case of PPNs, that a non-negligible fraction of the molecular gas is dissociated by the central source.We further discuss this in Section 8.1 in connection with the CO-to-UV relationship. CO versus NUV and FUV emission A potential trend between the CO emission and the distinctive UV properties of uvAGB stars has not been explored before.For our sample, L CO is compared with the extinction-corrected luminosities in the NUV and FUV bands (L NUV and L FUV ; see Fig. 10, left panels).As previously discussed in Sect.4.2, the CO non-detections (indicated by triangles) tend to cluster towards the higher range of NUV and FUV luminosities, suggesting a potential inverse relationship between CO and UV intensity.Establishing a definitive trend is challenging, however, since the CO-NUV and -FUV diagrams exhibit a considerable spread, the number of CO detections is relatively small or moderate (15 sources) and (unlike CO) the UV emission is notably variable. To address the limitation of a relatively small number of CO detections, and considering the established proportionality between CO and IRAS 60µm emission (discussed in Sect. 7.1), we also examine the relationship between UV and IRAS 60µm luminosity (Fig. 10, middle panels).In these diagrams, the presence of an anti-correlation appears to emerge more clearly but only for the NUV band, for which we derive a Pearson's correlation coefficient of r=−0.48.For the FUV band, however, the anti-correlation is not confirmed (r=−0.01)as the CO intensities exhibit a significant spread (on the order of dex∼2-3) for very similar values of L FUV . In the right panels of Figure 10, we examine the relationship between the total bolometric luminosity and the NUV and FUV luminosity for our target stars.We tentatively observe a weak anti-correlation (r=−0.30) in the NUV band, which is likely influenced by the underlying L 60µm -to-L NUV relationship.However, in the FUV band, no clear trend is evident.This observation aligns well with the results reported by Montez et al. (2017) in a comprehensive study of GALEX uvAGBs, where no significant correlation was found between FUV band emission and other bands (U BVRI JHK), in contrast to the stronger correlations observed in the NUV flux. In all the diagrams presented and discussed above, we assigned different labels to our uvAGB stars based on their variability class to examine potential trends.However, no significant relationships were observed, except for the expected observation that Mira-type variables generally exhibit stronger CO emission compared to semi-regulars and irregulars, which is consistent with broader studies of AGB stars. 
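The (anti-)correlation coefficients quoted in this section are standard Pearson's r values computed on the logarithmic luminosities. A minimal, generic example of that computation is given below; the arrays are invented placeholders, not our measured luminosities.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented placeholder values of log10(L_60um) and log10(L_NUV) for six sources.
log_l60 = np.array([2.1, 2.5, 2.8, 3.0, 3.3, 3.6])
log_lnuv = np.array([-1.0, -1.2, -1.1, -1.6, -1.8, -2.0])

r, p = pearsonr(log_l60, log_lnuv)
print(f"r = {r:.2f}, p = {p:.2f}")   # a negative r indicates an anti-correlation
```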
Finally, it is worth noting that the sources T Dra, RU Her and Y Gem stand out as outliers in all these diagrams, deviating from the general pattern observed in the sample.T Dra exhibits exceptionally intense CO and IRAS 60µm emission compared to the rest of the sample, yet its NUV and FUV emission falls within the average range.Similarly, RU Her shows the second more intense CO and IRAS 60µm emissions as well as the largest L bol , it also shows a large NUV flux despite the fact that it was not detected in the FUV band.In contrast, Y Gem is the strongest UV emitter in the sample, displaying also significant UV variations, and at the same time it shows relatively high values of L 60µm (Even though, a weak 12 CO (J=2-1) emission line was reported by Sahai et al. 2011).Remarkably, T Dra and Y Gem share an additional characteristic, they are both X-ray emitters.This X-ray emission perhaps suggests a more extreme nature for these uvAGB stars (potentially connected with strong binary in-teractions), warranting further investigation into the underlying mechanisms driving their distinct behaviour. Envelope parameters versus NUV and FUV emission In the investigation of potential trends between the two main envelope parameters derived from the CO population diagram analysis, namely the excitation temperature (T ex ) and the mass in the CO-emitting volume (M), no clear correlation or trend was observed in any of the cases with any of the UV bands, as shown in the left and middle panels of Figure 11.However, it is important to consider two important factors that could make it challenging to identify a correlation even if it exists.Firstly, the small number of sources, particularly those with two detected lines, which provide more reliable values of T ex and M. In this case, only 10 sources met the criteria, limiting the statistical power of the analysis.Secondly, the variability in the NUV and FUV bands introduces additional dispersion on the x-axis of the diagrams.This variability can obscure any underlying correlation between the envelope parameters and the UV bands. There is an additional factor that introduces complexity in discerning a clear trend of the UV emission with the envelope temperature, which is the lower limit nature of T ex derived from the CO population diagram analysis.This is because some regions in the outer layers of the envelope, where the low-J CO transition emission originates, may reach densities close to, but slightly below, the critical densities of the CO 1-0 and 2-1 transitions (i.e.≲(1-5)×10 4 cm −3 ).Consequently, the derived T ex values may underestimate the true T kin of the gas. Possible mild deviations from LTE are not anticipated to strongly impact the mass estimation.This has been discussed in detail by Ramos-Medina et al. (2018) and da Silva Santos et al. (2019).The reason for this is that even though T ex may deviate from the true T kin , it precisely represents the level population distribution.As a result, the computation of the total number of emitting molecules (and hence, the mass) remains robust since it involves summing up the populations of all levels. In our study, we also investigated the relationship between extinction-corrected GALEX magnitudes (M NUV and M FUV ) and Ṁ/V exp as a density proxy.We did not observe a confirmation of the linear anti-correlation depicted in Figure 10 of Montez et al. (2017) in the NUV band.However, it is important to note some differences between our study and that by Montez et al. 
(2017).First, we included GALEX UV photometry for different epochs when available.Second, we dereddened the GALEX magnitudes using distance-corrected values of the total extinction (interstellar and circumstellar, see tables 2 and 4), instead of relying on the default extinction estimate included in the GALEX catalogue, which accounts for the interstellar extinction across the whole Galaxy in the direction of a given target.Third, in the study by Montez et al. (2017), the mass-loss rates and absolute magnitudes were estimated using distances from the Hipparcos catalogue, whereas in our current paper, we utilised distances from the Gaia catalogue.It is important to mention that there is a discrepancy between the Gaia and Hipparcos distances for ∼30% of the sources in the Montez et al. sample, with a factor of 2 difference, and this percentage increases to ∼65% when considering a factor of 1.3.A recent study by Scicluna et al. (2022) has demonstrated that Hipparcos distances are particularly unreliable for distances greater than ∼200 pc.In addition, the uncertainties in the distances in Gaia and Hipparcos catalogues might be underestimated (see Sect. 2).Finally, our sample size is smaller and has a narrower range of M NUV values compared to Montez et al. (2017), who compiled data from a larger set of AGB stars.In the FUV band, neither Montez et al. (2017) nor our study found a clear trend.Considering these differences, our results are consistent with those of Montez et al. (2017), suggesting compatibility without indicating an anti-correlation between Ṁ/V exp and M NUV . In previous studies of AGB stars, several authors have observed an anti-correlation between the envelope mass and effective temperature (e.g.Ramos-Medina et al. 2018;da Silva Santos et al. 2019).However, in our investigation of uvAGBs, we did not find conclusive evidence supporting this relationship.This lack of evidence may be attributed to the relatively small number of sources in our sample and the relatively narrow ranges observed for M and T ex . Extinction effects and enhanced CO photodissociation In Sect.7.2, we find that there is an anti-correlation between the CO and IRAS 60µm emission and the UV luminosity in the uvAGBs in our study.Such anti-correlation is expected to exists to some extent primarily due to the fact that UV emission can be more easily detected when the extinction caused by the envelope is low.This anti-correlation is also shown in Fig. 2, where uvAGBS, specially fuvAGBs, are located in the regions correspondent to AGBs with thin envelopes.On the other hand, objects with higher mass-loss rates, and consequently stronger CO and infrared emission, have higher extinction, making UV detec-tion more challenging or even impossible even if an UV excess exists.This natural relationship between mass loss, extinction, and UV emission contributes to the observed anti-correlation between CO/IRAS 60µm and UV in our sample. On the other hand, the presence of excess UV radiation from the central source, regardless of its origin, is anticipated to induce enhanced photodissociation of CO in the inner envelope regions.This increased photodissociation would result in weaker CO emission overall.Another consequence from this is that the relationship between CO and IRAS 60µm emission would exhibit different slopes for AGB stars with and without UV excess, which is indeed a confirmed result from this study as described in Sect.7.1 and shown in Fig. 9. 
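For reference, the dereddening and distance correction mentioned above amount to the standard relation M = m − 5 log10(D/10 pc) − A band . The sketch below illustrates it; the extinction coefficient R band used in the example is an assumed, generic value, not the one adopted in this work, and the correction applied in the paper additionally includes a circumstellar component.

```python
import numpy as np

def absolute_dereddened_mag(m_obs, dist_pc, ebv, r_band):
    """Extinction-corrected absolute magnitude:
    M = m - 5*log10(D / 10 pc) - A_band, with A_band = R_band * E(B-V)."""
    return m_obs - 5.0 * np.log10(dist_pc / 10.0) - r_band * ebv

# Illustrative only: R_band ~ 8 is a generic UV extinction-coefficient assumption.
print(absolute_dereddened_mag(m_obs=17.5, dist_pc=400.0, ebv=0.05, r_band=8.0))
```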
We hypothesise that the relatively weaker relationship observed between CO (or IRAS 60µm) emission and L FUV , compared to L NUV , can be attributed, at least in part, to the lower overall extinction near the FUV wavelength of approximately 1500 Å.In contrast, the NUV bandpass is situated in the vicinity of the UV extinction bump around 2175 Å, where the dust extinction curve shows a relative maximum.This suggests that the NUV band is more sensitive to extinction-related effects than the FUV band.Additionally, it is also very likely that different emission mechanisms contribute in different proportions in the NUV band and in the FUV band.In particular, the FUV band may be dominated by emission associated with the presence of a binary, e.g.accretion as suggested by Sahai et al. (2008), whereas the NUV band may have a greater contribution from intrinsic emission, e.g.chromospheric emission. The lack of correlation between FUV and any of the envelope or stellar properties explored so far in uvAGBs lend support to the notion that the UV excess observed in our stars, with a vast majority exhibiting relatively high FUV-to-NUV flux ratios R FUV/NUV ≳0.06, has an extrinsic origin, meaning that it is not directly linked to the intrinsic fundamental properties of AGB stars, such as their overall luminosity, but to binary-induced physical processes. Proportion of uvAGBs in large samples of AGB stars As part of this study, we conducted a comparison between the CO-derived envelope properties and emission characteristics of our sample of uvAGBs and those from AGB stars from larger samples found in the literature.Our goal was to identify any unique or distinguishing features that could differentiate these two groups.This comparison is not straightforward, since the larger samples of AGB stars surely also include UV-excess AGB stars that had not been identified as such.These unidentified sources add an additional layer of complexity to the analysis, making it challenging to isolate and differentiate the specific properties of uvAGBs. In this section, we aim to provide constraints on the proportion of uvAGB stars within the larger population of AGB stars.The catalogue of GALEX UV emission from AGB stars by Montez et al. (2017), consisting of 316 AGB stars observed by GALEX, serves as a basis for our analysis.Among these stars, 179 (57%) were detected in the NUV band, while only 38 (12%) were detected in the FUV band.However, it is important to acknowledge that these numbers represent an upper limit to the true proportion of uvAGB stars.This is due to the fact that the original sample of ∼500 well-studied, nearby AGB stars, from which the catalogue was compiled, was initially assembled to search for X-ray emission.As a result, the sample included preselected UV-and/or X-ray emitters, biasing the proportion of uvAGB stars in the catalogue. As part of our analysis, we conducted an independent search for uvAGB stars by cross-matching the Suh (2021) catalogue of AGB stars with the GALEX archive (see Sect. 2).By applying matching criteria similar to those used by Montez et al. 
(2017), we identified a total of 9838 and 1464 AGB stars that have been observed with GALEX in NUV and FUV bands respectively.Among these, 1019 (10.4%) were detected in the NUV band, and 66 (4.5%) were detected in the FUV band.These results indicate that AGB stars are not frequently detected at UV wavelengths, particularly in the FUV range.However, it is important to consider that the Suh (2021) catalogue includes a substantial number of AGB stars located at large distances (>1000 pc).This introduces an observational bias against detecting uvAGBs, as a significant portion of them would remain undetected due to the substantial interstellar extinction at those distances.For this reason, the low proportion of uvAGBs has to be considered as a lower limit. The discrepancies in the detection rates of nuvAGBs and fu-vAGBs introduce significant uncertainties in estimating the ratios of these UV-emitting AGB stars in the overall population.Additionally, the distribution of galactic AGB stars is skewed towards the galactic centre, resulting in limited sky coverage by GALEX and other UV telescopes in those regions.Conse-quently, a considerable number of UV-emitting AGB stars located in the galactic centre remain unidentified.This spatial bias contributes to a gap in the statistics of UV emissions in AGB stars, preventing a comprehensive understanding of the full population of UV-emitting AGBs. In summary, the proportion of UV-emitting AGB stars (uvAGBs) falls within the range of 10-60% for NUV (nuvAGBs) and 4-12% for FUV (fuvAGBs).These estimates indicate that the fraction of uvAGBs within the AGB population, specially nuvAGBs, is not negligible, which can complicate the identification of genuine uvAGB properties from the broader AGB population. Summary and conclusions We present observations with the IRAM-30 m of the 12 CO (J=1-0) and 12 CO (J=2-1) line emission in a sample of 29 AGB stars with UV excess.Except for RU Her, SV Peg, TU And, and Y Crb, all our targets show an excess in the NUV and in the FUV.The presence of FUV excess in AGB stars is commonly unambiguously attributed to the influence of a binary system, either through the presence of a hot companion or an accretion disk (see Sect. 1).Notably, several stars in our sample exhibit Xray emission as evidence for binarity (see Sect. 2), and all the stars in the sample show proper motion anomalies (see Kervella et al. 2019).Consequently, the uvAGBs in our sample are considered potential AGB binary candidates.Our sample primarily consists of O-rich stars, representing 90% of the sample.There is also a substantial over-representation of semi-irregular variable stars, constituting 55% of the total population.This proportion deviates from that commonly observed in larger samples of AGB stars documented in the literature, where Mira-type variables tend to be more prevalent (see Sect. 2). We detect CO emission in a total of 15 targets, 10 of which exhibit emission in both the 12 CO (J=2-1) and 12 CO (J=1-0) transitions.We observe a trend for the CO non-detections to be associated with the stars with the highest UV excesses. 
The widths of the observed CO line profiles are indicative of expansion velocities in the range V exp =3-13 km s −1 , within the range for AGB stars.Line profiles vary between flat, doublehorned, triangular, Gaussian, and parabolic, not following too closely the canonical profile shapes expected from the standard CSE model (this is not exclusive of uvAGBs, but a common property of O-rich AGBs).At least in one case, RW Boo, there is clear evidence of a composite line profile indicative of two kinematic components expanding at 8.0 and 16.6 km s −1 , respectively. For CO detections, we performed a population diagram analysis to derive the mean excitation temperature (T ex ) and mass (M) of the CO-emitting volume (see Sect. 5).We also estimated in a self-consistent manner the characteristic radius of the COemitting layers (R s ) and the CO photo-dissociation radius (R CO ) together with the mass-loss rate ( Ṁ). Excitation temperatures T ex ∼5-15 K are prevalent in our targets, except for RR Eri which requires a slightly higher value of T ex ≳25 K.These are probably lower limits to the gas kinetic temperatures in the outer (1-5)×10 16 cm) envelope regions traced by low-J CO transitions.We determined envelope masses 0.3-55 ×10 −4 M ⊙ , which correspond to the amount of mass lost by these objects within the past ∼200-2000 years, considering the deduced envelope size and measured expansion velocities. We find values of the mass-loss rates between 6 × 10 −8 and 3 × 10 −6 M ⊙ yr −1 (i.e.within the range of values reported in previous studies of AGB stars), but there are no objects with extremely high or extremely low mass-loss rates.We explored several relationships between various envelope parameters that are well established for AGB stars.Specifically, we focused on Ṁ, V exp , and the stellar pulsation period (see Sect. 6.3).We find that the uvAGBs in our studies closely adhere to these relationships (Fig. 7).We also estimate the radiation pressure efficiency β= ṀV exp c/L bol and found values below 1 for all our targets, consistent with dust-driven winds.However, the efficiency of mass loss is not as high as theoretically possible as it is also observed in many AGB stars. 
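For reference, the β values quoted above can be computed with a few lines of Python; the sketch below uses astropy for unit handling, and the numerical values in the example are placeholders rather than measurements of our targets.

import astropy.units as u
from astropy.constants import c

def beta(mdot_msun_yr, vexp_kms, lbol_lsun):
    # Ratio of the wind momentum rate to the stellar radiative momentum rate,
    # beta = Mdot * Vexp * c / Lbol (dimensionless).
    mdot = mdot_msun_yr * u.M_sun / u.yr
    vexp = vexp_kms * u.km / u.s
    lbol = lbol_lsun * u.L_sun
    return float((mdot * vexp * c / lbol).decompose())

# Placeholder values: Mdot = 1e-6 Msun/yr, Vexp = 10 km/s, Lbol = 5000 Lsun.
print(beta(1e-6, 10.0, 5000.0))   # ~0.1, i.e. below the single-scattering limit of 1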
We investigated correlations between the CO emission intensity and both the infrared emission at 60µm and the distinctive ultraviolet emission of our uvAGBs.We corroborate the existence of proportionality between the CO intensity and the IRAS 60µm emission in our sample.However, we observe a slope that is lower compared to previous studies conducted on O-rich and C-rich AGB stars, and closer to the slope observed in post-AGB objects or pPNe.This suggests that uvAGBs may have a lower gas-to-dust mass ratio compared to other AGB stars.This could be attributed to a higher fraction of atomic gas relative to molecular gas (particularly CO), which may be influenced by the presence of excess high-energy radiation emissions such as NUV, FUV, and in some cases X-rays.As for the COto-UV emission comparison, we find a tentative anti-correlation of CO with the extinction-corrected NUV emission; however, no clear trend is observed with the FUV excess.The FUV excess shows in general very scattered values and lack of correlation with any of the investigated envelope parameters or with the emission at other wavelengths.This would be in good agreement with the idea that FUV emission does not have an intrinsic origin, and it would be an unequivocal sign of binarity.This is the first (and so far only) dedicated CO-based study of uvAGB stars as a class.We find that our sample of uvAGB stars does not exhibit notable discrepancies when compared to the broader category of AGB stars, except for the different COto-IRAS 60µm trend, which is more similar to that found for pre-PNe.In principle, our findings fit well with the dust-driven wind scenario, and there is no need to invoke alternative massloss mechanisms to explain the characteristics of the envelopes around uvAGBs.This conclusion is based on results obtained from single-dish low-J CO line emission observations. Assuming that the uvAGB stars in our sample are bona fide binaries, our findings indicate that the effects of companions on the outer regions of AGB winds, as traced by the J=2-1 and J=1-0 transitions, are subtle, and will require higher sensitivity and higher-dynamic range spatially resolved mapping to be identified (as demonstrated by the challenging identification of arcs and spirals within the faint haloes around certain AGB and post-AGB stars, see Sect. 1).The presence of companions can have more noticeable effects in the inner regions (≲(100-500) au) of the primary AGB star's wind, which can be studied using higher excitation lines, such as higher-J CO transitions. It should also be noted that the AGB samples with which we have compared ours are not only composed of single AGBs, but certainly include (in an unknown proportion) binary AGB stars of any type, including some uvAGBs.This makes it difficult to adequately isolate the unique or distinctive characteristics of the molecular envelopes of uvAGB (binary candidate) stars from this comparison. Finally, we find that there is a larger proportion of irregular and semi-regular variables among AGB stars with FUV+NUV emission than among AGB stars with only NUV emission (see Sect. 
2). This raises the possibility that SR and LB variability in AGB stars may be an indicator of binarity and of the associated accretion activity that enhances FUV emission. However, a study of a larger sample is needed to explore this tentative result more thoroughly, taking into account the observational bias that larger samples will likely cover a larger distance range, making it more difficult to detect and classify variability and to detect UV emission.

In addition, the extinction correction can be applied to the observed fluxes as F_corr,λ = F_obs,λ × 10^(0.4 A_λ). Finally, the luminosity of each source can be estimated at each frequency as L_ν = 4π D² F_ν, where D is the distance to the source.

The extinction-corrected bolometric luminosity of each source listed in Table 2 was estimated by numerical integration of its spectral energy distribution (SED). The SEDs were built using photometric data at different wavelengths (from 300 nm to 300 µm) retrieved with the VizieR Photometry viewer tool of the VizieR database (Ochsenbein et al. 2000). We applied the ISM extinction correction to the SEDs (see values in Table 2) adopting A_λ/A_V ∝ λ^(-1) and A_V = E(B−V) R_V, where A_λ is the extinction at a given wavelength and E(B−V) is the reddening, estimated from the extinction in the Johnson V band with the canonical value R_V = 3.1 of the total-to-selective extinction ratio.

The NUV and FUV magnitudes and fluxes of our sample are affected by extinction from dust both in the ISM and in the circumstellar envelope. A rough estimate of the circumstellar extinction of our targets was obtained using the total H2 column density deduced from our CO-based analysis (Sect. 5) and adopting the canonical conversion ratio N_tot/A_V = 1.8 × 10^21 cm^-2 mag^-1. The GALEX fluxes were corrected for both ISM and CSM extinction (see Tables 2 and 4) assuming R_NUV = 7.81 and R_FUV = 6.30 (as in Montez et al. 2017).

Appendix D: Comparison of values of Ṁ with previous estimates

For a subset of our sample of uvAGB stars (11 sources), we compared our values of the mass-loss rate (Ṁ), obtained from a first-order approximation using the population diagram method, with previous CO-based estimates available in the literature (Ṁ_lit). For this comparison, aimed at evaluating the uncertainties associated with our simplified approach, we place greater emphasis on a few selected sources for which detailed radiative transfer models of more than one low-J CO transition, typically the 12CO (J=1-0), 12CO (J=2-1), and/or 12CO (J=3-2) emission lines, have been previously conducted. To ensure a meaningful comparison of the mass-loss rates, we rescaled the previous values by adopting the same distance (D), expansion velocity (V_exp), and fractional CO abundance (X_CO) as used in this study (see Tables 1 and 4), taking into account that Ṁ ∝ D² V_exp / X_CO. Previous CO observations, data analyses, and mass-loss rate estimates for RT Cnc and other sources in our sample are reported in Knapp & Morris (1985), Bujarrabal et al. (1989), Kahane & Jura (1994), Young (1995), Knapp et al. (1998), Schöier & Olofsson (2001), Olofsson et al. (2002), Danilovich et al. (2015), Olofsson et al. (1987), Neri et al. (1998), Loup et al. (1993), Kerschbaum et al. (1996), Groenewegen et al. (2002), and Winters et al. (2003).
Figure D.1 shows the properly scaled Ṁ-to-Ṁ_lit comparison. Overall, the obtained values of the mass-loss rate are in good agreement (within a factor of ~3) with those from the literature, especially those obtained from radiative transfer models using more than one CO transition, which are expected to provide the most reliable and accurate values. In a few cases, larger discrepancies are observed.

Fig. 1: IRAS 60 µm flux distribution of our targets (green) and of AGB stars (including nuvAGBs and fuvAGBs) from Suh (2021), see Sect. 2. The colours represent the NUV/FUV emission; the sample is indicated in the top right corner for reference.

Fig. 2: IRAS [25]−[60] vs. [12]−[25] colour-colour diagram showing the location of our sample of 29 uvAGBs and of the AGB stars (with and without UV excess) from the catalogue by Suh (2021), see Sect. 2. The colours of the markers represent the NUV/FUV emission; the sample is indicated in the bottom right corner for reference.

Fig. 5: Comparison between different parameters and the 12CO detectability of our targets. (a) Scale height vs. distance for our targets; the solid line represents a viewing angle of ±90°, the dotted line ±30°, and the dashed line 0°. Sources not detected in 12CO are shown as red circles, sources detected in the 12CO (J=2-1) line as green squares, and sources detected in both 12CO lines as blue stars. (b) IRAS [25]−[60] vs. [12]−[25] two-colour diagram restricted to the regions in which our sources are located (regions I, II, IIIa, VIa, and VII); colour scheme as in (a). (c) Histogram of the NUV magnitudes of our sample; sources detected in 12CO are shown in white and non-detections in grey. (d) Same as (c) for the FUV magnitudes. (e) Comparison between NUV and FUV luminosities; the solid, dashed, and dotted lines represent R_FUV/NUV = 1, 0.1, and 0.01, respectively. Solid circles correspond to well-measured luminosities and empty triangles to upper limits in one of the luminosities; colour scheme as in (a).

Fig. 6: Distribution of the CO envelope expansion velocity (V_exp), mass (M), characteristic radius (R_s), and mass-loss rate (Ṁ) of the uvAGBs with CO detections in our sample.

Fig. 8: Comparison between envelope parameters related to the mass-loss mechanism. Upper: distribution of the β parameter, defined as the ratio of the outflow to the stellar radiation momentum (β = Ṁ V_exp c / L_bol, Sect. 6.4). Centre: relationship between the stellar pulsation period (P) and β. Lower: relationship between Ṁ and L_bol; the solid line represents the single-scattering limit for V_exp = 10 km s^-1 (see Groenewegen & Sloan 2018) and the dashed line the best linear fit to our dataset. The colours and shapes of the markers are the same as in Fig. 7. Grey markers represent AGB stars with Ṁ and L_bol values compiled from Danilovich et al. (2015) (circles: O-rich AGB stars; squares: S stars; crosses: C-rich AGB stars).

Fig. 9: Comparisons of the CO integrated intensity, IRAS 60 µm emission, and bolometric luminosity (Tables B.1 and 2). In the upper and middle panels L_CO is compared with L_60µm and L_bol. In the lower panel the 12CO (J=1-0) main-beam brightness temperature (T_MB) is plotted against the IRAS 60 µm flux after properly scaling T_MB when observed with a telescope other than the IRAM-30 m antenna (see Sect. 7.1). The dashed line is the best linear fit to our data; the dotted lines are linear correlations from Nyman et al. (1992) and Bujarrabal et al. (1992) for O-rich AGBs (orange), C-rich AGBs (purple), and pre-PNe (cyan). The colours and shapes of the markers are the same as in Fig. 7.

Fig. 10: Comparisons between the 12CO (J=2-1) velocity-integrated luminosity (L_CO, left), IRAS 60 µm luminosity (L_60µm, middle), and bolometric luminosity (L_bol) and the luminosity in the GALEX NUV and FUV bands (top and bottom panels, respectively). The colour-coding used for variability types is shown in the top left panel. Triangles are upper limits. The dashed line represents the best linear fit to the F_60-to-L_NUV data points (Pearson's correlation coefficient r = −0.48). The trends in the rest of the represented variables have not been confirmed (Sect. 7.2).

Fig. 11: Comparison of the main envelope parameters (T_ex, M, and Ṁ/V_exp) and the UV properties of our sample of uvAGBs (NUV and FUV; top and bottom panels, respectively).

Fig. D.1: Comparison between mass-loss rates scaled from the literature (Ṁ_lit) and the Ṁ values obtained in this work with R_s = R_CO/2.5. The solid line indicates equality, the dashed lines ratios of 1/2 and 2, and the dotted lines ratios of 1/3 and 3. Filled circles and empty squares indicate previous studies based on radiative transfer models and on empirical laws, respectively.

Table 1: Astronomical parameters of the sample from Gaia DR3.

Table 2: Observational parameters of the sources.

Table 3: Spectral measurements for detected sources. Col. (…): expansion velocity of the shell fit; Col. (8): full width at half maximum of the Gaussian fit.

Table B.1: Fluxes used in this study.
18,113.2
2024-01-16T00:00:00.000
[ "Physics" ]
Application of Hyperspectral Technology with Machine Learning for Brix Detection of Pastry Pears Sugar content is an essential indicator for evaluating crisp pear quality and categorization, being used for fruit quality identification and market sales prediction. In this study, we paired a support vector machine (SVM) algorithm with genetic algorithm optimization to reliably estimate the sugar content in crisp pears. We evaluated the spectral data and actual sugar content in crisp pears, then applied three preprocessing methods to the spectral data: standard normal variable transformation (SNV), multivariate scattering correction (MSC), and convolution smoothing (SG). Support vector regression (SVR) models were built using processing approaches. According to the findings, the SVM model preprocessed with convolution smoothing (SG) was the most accurate, with a correlation coefficient 0.0742 higher than that of the raw spectral data. Based on this finding, we used competitive adaptive reweighting (CARS) and the continuous projection algorithm (SPA) to select key representative wavelengths from the spectral data. Finally, we used the retrieved characteristic wavelength data to create a support vector machine model (GASVR) that was genetically tuned. The correlation coefficient of the SG–GASVR model in the prediction set was higher by 0.0321 and the root mean square prediction error (RMSEP) was lower by 0.0267 compared with those of the SG–SVR model. The SG–CARS–GASVR model had the highest correlation coefficient, at 0.8992. In conclusion, the developed SG–CARS–GASVR model provides a reliable method for detecting the sugar content in crisp pear using hyperspectral technology, thereby increasing the accuracy and efficiency of the quality assessment of crisp pear. Introduction Pears are one of the world's most popular fruits [1].Pears, compared to other fruits, have a higher dietary fiber content and can have more favorable effects on the human gastrointestinal tract, making them popular among consumers [2]. The sugar content in pears can influence their flavor, so it is an essential predictor of pear quality.The pear quality control procedure in many countries now involves the detection of sugar content [3].Currently, the quality inspection and classification of most fruits primarily rely on manual examination [4], which is subjective and inefficient [5].Now, the quality features of most fruit can be directly detected, exhibited, and identified using recent advancements in computer vision technology such as RGB and hyperspectral images.Additionally, optical imaging technologies such as spectral imaging are becoming increasingly popular with the development of hyperspectral sensors for the automated detection and nondestructive grading of fruit quality [6].These systems can be used to collect a large amount of digital data related to fruit properties [7].When processing large batches of fruit-grading activities, the fruit quality detection accuracy and detection time of contemporary optical imaging technology are higher and lower, respectively, than those of previous approaches [8].Notably, the application of contemporary optical imaging equipment for nondestructive testing can substantially reduce labor costs while increasing testing efficiency. 
Hyperspectral imaging techniques can be used to efficiently capture internal fruit quality information, as differences in fruit quality are reflected in differences in waveband and spatial distribution information [9].These methods have performed well in testing the fruit quality, accurately detecting the attributes of several fruits, including glucose content [10], persimmon skin hardness [11], banana water content [12], strawberry skin abrasions [13], and citrus maturity [14].Gamal El Masry et al. employed hyperspectral imaging technology in the visible and near-infrared (400-1000 nm) regions in 2006 to build a model to nondestructively quantify indicators such as the total soluble solid (TSS) content in strawberries [15].In 2016, Jiangbo Li et al. applied long-wave near-infrared hyperspectral imaging technology to evaluate the soluble solid content (SSC) in pears [16].Dongyan Zhang et al. used hyperspectral imaging technology to quantify the sugar content in a specific pear variety (Danshan) in 2018 [17].In 2023, Min Xu et al. used hyperspectral technology with the deep-learning-based stacked autoencoder (SAE) method to construct a deep learning model to quickly detect the TSS in Kyoho grapes.As such, nondestructive testing (NDT) [18] represents a suite of analytical techniques employed for the evaluation of a material's properties without causing damage. Most scholars have chosen to screen features from several bands in hyperspectral data to find characteristic bands that are strongly associated with fruit quality.These methods can efficiently handle some of the features that may be included in the whole spectrum when combined with hyperspectral imaging technology.In these models, various wavelengths can cause a number of issues such as collinearity, redundancy, and noise interference [19].To address these issues, the feature extraction algorithm and the quality of the extracted features can be enhanced to improve the model's prediction performance [20].A variety of variable selection algorithms have been created for feature extraction technology to generate parsimonious models.Nogaard et al. developed interval partial least squares (iPLS), a graph-oriented local modeling approach, which they tested on a near-infrared (NIR) spectral dataset based on 60 beer samples.The spectral correlation coefficient was calculated: the root mean square error was 0.17%, which was four times less than that of the whole spectrum [21].Munera investigated the interior quality of persimmons using hyperspectral imaging technology and predicted hardness using the continuous projection and partial least squares regression models, achieving an Rp2 prediction accuracy of 0.80 and an RMSEP of 3.66 [11].Choi et al. employed typical normal transformation and smooth convolution preprocessing together with the partial least squares regression approach to develop a model using the near-infrared spectroscopy data of pear sugar concentration.The correlation coefficient on the prediction set ranged between 0.90 and 0.96; the root mean square error ranged from 0.29 to 0.33 [22].Although these variable selection algorithms are novel, their algorithm fusion techniques are straightforward.The algorithm is neither tuned nor preprocessed step-by-step, limiting further increases in model detection performance [9]. 
In this study, we aimed to address the aforementioned issues by developing a method combining feature extraction engineering methodologies.We used a genetic algorithm (GA) to enhance the support vector machine algorithm (SVR), which was then paired with three preprocessing methods: standard normal transformation (SNV), multivariate scattering correction (MSC), and smooth convolution (SG), as well as competitive adaptive reweighting.The competitive adaptive reweighting (CARS) algorithm and the continuous projection algorithm (SPA) were the two feature extraction approaches.Using hyperspectral data, we developed a model for detecting the sugar content in crisp pear, offering both theoretical reference and technological assistance for the grading and nondestructive testing of crispy pear quality.The following are the novel features of this study: (1) A crisp pear sugar content dataset was developed and published using hyperspectral imaging technology; (2) The feasibility and ideal model of the optimized genetic algorithm were investigated for predicting the sugar content in pear. Hyperspectral Technology Spectroscopy is an interdisciplinary study of physics and chemistry, examining the interaction of electromagnetic waves with substances in a spectrum [23].Because the atoms that comprise each substance have distinct spectral lines, spectra can be used to identify substances and determine their chemical compositions [24].This is known as spectral analysis.Hyperspectral imaging technology has the characteristics of both traditional imaging and spectral analysis and can be used to simultaneously obtain the spatial and spectral information of a detected object, as well as to detect physiological characteristics of, for example, fruit, via detecting light absorption, transmission, and reflection [25].For example, hyperspectral imaging technology has been used to identify early fruit rot [26] and estimate strawberry moisture content and maturity [27]. Genetic Algorithm (GA) The evolutionary principles of nature inspired the genetic algorithm (GA).The GA is a search technique used for discovering optimal solutions that is applied in a variety of optimization problems [28], for example, studying multiobjective optimization models for a sustainable agricultural industry structure [29] and the optimization of apple disease segmentation and classification based on strong correlation and feature selection [30].As such, the GA was used in this study to optimize the SVM model generated using the spectral data from crisp pear obtained using hyperspectral technology to accurately identify the sugar content in crisp pear. Support Vector Machine (SVM) A support vector machine (SVM) is a binary generalized linear classifier that uses supervised learning to classify data that performs particularly well when dealing with small sample sizes and in nonlinear and high-dimensional situations [31].An SVM is currently capable of handling multiclassification problems and performing application tasks in agricultural detection owing to extensive research and development; for example, see [32] for the application of a support vector machine in precision agriculture and [33] for its application in the multicategory recognition of maize seedlings/weeds in visible/nearinfrared imagery. 
Data Production The sample collection area is shown in Figure 1.A total of 168 crisp pear samples were collected in November 2022, in Yucheng District, Ya'an City, Sichuan Province (29.9890 latitude north, 102.9820 longitude east).The samples were consistent in terms of size, surface integrity, and maturity.The experiment began with meticulous wiping and random numbering of each pear sample.Then, each sample was placed in the experimental setting for 24 h to guarantee that the sample's temperature was synchronized with the ambient temperature, establishing the groundwork for later detection work. Figure 1 shows the specific geographical location of the study, as well as the specific origin of the pear samples.The Gaia Sorter hyperspectral sorter, manufactured by Beijing Zhuoli Hanguang Company, was employed to gather data, as illustrated in Figure 2a.The system included a high-resolution CCD camera (1344 × 1024 pixels), a spectrometer (Image-Spectral Image, working wavelength range of 387 nm to 1034 nm, capable of collecting spectral information in 256 wavelength bands), a diffuse reflection light source (primarily a bromine tungsten lamp with a power of 200 W), an electric translation stage, and a computer system.All acquisition activities were performed in a specialized dark box, as illustrated in Figure 2b, to avoid the impact of external light on image acquisition.We employed SpecView Version 3 (V3) software to precisely adjust the parameters of the instrument before collecting hyperspectral images of the samples to ensure the capture of high-quality images.These parameters included exposure time, spectral resolution, and the electronically controlled mobile platform's action parameters.Furthermore, the instrument was warmed for 30 min before use to ensure constant temperature and light intensity during the experiment.Given the potential impact of ambient conditions and the instrument on hyperspectral images, the original images were also subjected to ordinary black-and-white corrective processing [34].Equation (1) shows the adjustment formula: where I is the corrected image, I 0 is the original image, B is the black standard image, and W is the white standard image.We set the camera exposure time to 11 ms, the distance between the camera objective lens and the platform to 190 mm, and the moving platform speeds to 0.5 cm/s and 1 cm/s.Furthermore, the camera's spectral range was 387-1034 nm, with a spectral resolution of 2.8 nm. The spectrometer's imaging method was as follows.First, the spectrometer scanned each row of pixels in the sample to be measured to produce a single row of image and spectral information.Second, the electric translation stage advanced the sample along the predetermined path, successively exposing and imaging the placed CCDs in the longitudinal direction.As such, the comprehensive three-dimensional hyperspectral image data from the sample were acquired when paired with horizontal and vertical imaging information.Each crisp pear sample's spectral data had a fixed 1344 × 1024-pixel resolution and included 256 wavelength bands.Given the equipment noise at both ends of the spectrum range, we selected 237 wavelength data for each sample in the 400-1000 nm range to analyze.Following the processing of these 168 pieces of data, they were separated into a training set (118 samples) and a prediction set (50 samples) in a 7:3 ratio.Figure 2 depicts the raw spectrum data.We revealed three distinct absorption valleys in the 0-100 nm to 200-300 nm wavelength ranges. 
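A minimal Python sketch of the black-and-white reflectance correction of Equation (1) above is shown below; the array shapes follow the camera format described in the text, while the reference values are hypothetical stand-ins for the actual calibration frames.

import numpy as np

def calibrate(raw_cube, white_ref, dark_ref):
    # Black-and-white correction, Eq. (1): I = (I0 - B) / (W - B).
    denom = np.clip(white_ref - dark_ref, 1e-6, None)   # avoid division by zero
    return (raw_cube - dark_ref) / denom

# Hypothetical cube: 1024 x 1344 pixels x 256 spectral bands.
raw = np.random.rand(1024, 1344, 256)
white = np.full_like(raw, 0.95)   # white standard frame
dark = np.full_like(raw, 0.02)    # dark (black) standard frame
reflectance = calibrate(raw, white, dark)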
Figure 3 depicts the raw spectral data of the 168 unprocessed pear samples. A Fanover digital sugar content refractometer was used in this study to determine the physical and chemical sugar contents in crisp pear. The device had a resolution of 0.1% and an accuracy of 0.2% for measuring fruits and vegetables with a sugar content of up to 32%. It performed stably at ambient temperatures ranging from 10 to 40 °C, allowing for the reliable detection of sugar content in our tests. The sugar content in the 168 crisp pear samples was assessed following the NY/T2637-2014 standard [35]. The refractometer was used to measure sugar content in Brix units at a constant laboratory temperature of 19 °C. To ensure the accuracy of the measurements, the refractometer's measuring window was cleaned with distilled water and dried with special lens-cleaning paper before each measurement. We then removed the pulp from the equatorial portion of each pear, squeezed out the juice, and placed it on the refractometer's window. Each sample's Brix value was measured three times independently, with the average serving as the final record. The 168 samples were divided into training and test sets in a 7:3 ratio. Table 1 shows the division of the training and test sets and the parameters calculated for each part before the formal processing of the spectral data. The sugar level in the training set samples ranged from 7.4 to 12.5, as shown in Table 1, and that in the test set samples ranged from 8.4 to 10.5. The test set's standard deviation, 1.52, was smaller than the training set's 2.83, indicating that the test set's data distribution was more concentrated.

Data Preprocessing

The convolution smoothing (Savitzky-Golay, SG) algorithm is an accurate and efficient method for smoothing spectral data. It estimates each point from a weighted least-squares polynomial fit within a smoothing window, thereby emphasizing the center point [36]. A fixed-size window is selected, all spectral data within the window are treated as a whole, the measurement points are indexed x = −m, 1 − m, …, 0, 1, …, m, and a polynomial of the form shown in Equation (2), f(x) = b_0 + b_1 x + b_2 x² + … + b_K x^K, is fitted to them by least squares. Minimizing the residual between the original spectrum and the fitted polynomial yields the optimal coefficient (smoothing) matrix B = X(XᵀX)⁻¹Xᵀ. This matrix is then convolved with the spectral data of each sample, smoothing the original spectrum while preserving the integrity of the data and strengthening the role of the window's center point, which provides a more accurate and stable basis for subsequent spectral analysis. In addition to convolution smoothing, the standard normal transformation (SNV) and multivariate scattering correction (MSC) algorithms are common spectral preprocessing methods. The effects of the three preprocessing methods (SNV, MSC, and SG) are shown in Figure 4, where each colored curve represents the hyperspectral reflectance response of one experimental sample.
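To make the three preprocessing steps concrete, the following Python sketch implements convolution smoothing, SNV, and MSC for a matrix of spectra (rows = samples, columns = wavelengths). It is an illustrative reimplementation rather than the code used in the study; the window width and polynomial order are assumed values, and the example data are synthetic placeholders.

import numpy as np
from scipy.signal import savgol_filter

def sg_smooth(spectra, window=11, poly=2):
    # Savitzky-Golay (convolution) smoothing along the wavelength axis.
    return savgol_filter(spectra, window_length=window, polyorder=poly, axis=1)

def snv(spectra):
    # Standard normal transformation: center and scale each spectrum individually.
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def msc(spectra):
    # Multivariate scattering correction against the mean spectrum:
    # fit each spectrum s = slope*reference + intercept, then invert the fit.
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected

# Example with hypothetical data: 168 samples x 237 retained wavelengths.
X = np.random.rand(168, 237)
X_sg, X_snv, X_msc = sg_smooth(X), snv(X), msc(X)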
Feature Selection Method

The CARS methodology is based on Monte Carlo sampling, with adaptive reweighted sampling (ARS) as its core technique. The main advantage of this algorithm is that the ARS strategy effectively selects wavelength points with larger absolute regression coefficients; the algorithm then screens out the subset with the smallest cross-validation error, thereby efficiently identifying the optimal variable combination [37]. In each CARS step, some samples are randomly selected from the correction set for modeling, whereas the remaining samples are used as the prediction set. The number of samplings (N) must be set in advance. The program then uses an exponential decay function to exclude wavelength points with smaller weighted regression coefficients, and each round of sampling uses the ARS approach to select wavelengths from the previous round's variable collection. After N rounds of sampling, N candidate subsets of feature wavelengths and their associated statistics are produced, and the characteristic wavelengths are taken from the subset with the lowest error. The CARS algorithm assigns each variable a weight: the higher the weight, the larger the variable's contribution to the model, and the higher its probability of being selected. The specifics of this computation are given in Formulas (3) and (4): the partial least squares model can be written as T = XW and y = Tc + e = XWc + e = Xb + e (with b = Wc), where X is a spectral matrix with M rows and P columns, T is the score matrix of X, W contains the linear combination coefficients relating X and T, c is the regression coefficient vector of the partial least squares model established with T, and e is the error vector. The weight of the i-th wavelength is then defined as ω_i = |b_i| / Σ_{j=1}^{P} |b_j|, i = 1, 2, …, P. When implementing the CARS algorithm, the number of Monte Carlo samplings N needs to be determined in advance. The algorithm cross-validates each candidate variable subset and compares their root mean square errors of cross-validation (RMSECV); the subset with the smallest RMSECV is selected as the optimal variable subset. Importantly, CARS eliminates uninformative or low-information variables through two key steps in each run: the exponential decay function (EDF) and adaptive reweighted sampling (ARS).
The EDF defines the proportion of variables retained in each run; the retention ratio in the i-th run is r_i = a e^(−k i), where a and k are treated as constants under the boundary conditions of the procedure. In the first run the full set of wavelengths is used for modeling, so r_1 = 1; by the N-th run only two wavelengths are retained, as expressed by Equation (7), r_N = 2/P. From these two conditions, the constants a and k of Formulas (8) and (9) follow as a = (P/2)^(1/(N−1)) and k = ln(P/2)/(N−1).

In the wavelength selection process, we first use the exponential decay function (EDF) to quickly eliminate the variables with lower weights. Second, using the adaptive reweighted sampling (ARS) method and following the survival-of-the-fittest principle, a new subset of variables is selected from the remaining P × r_i variables. Third, cross-validation is used to calculate the root mean square error (RMSECV) of this new subset, which serves as the benchmark for the next round of iteration. Each loop iteration thus builds on the results of the previous round and gradually approaches the optimal solution. This process not only ensures the stability and accuracy of the model but also increases the efficiency and practicality of the calculations.

Principle of Genetic Algorithm

The starting population in the genetic algorithm optimization method used in this study is composed of a series of candidate solutions, each represented by a chromosome encoding a specific set of parameter values. Based on the defined fitness function, the algorithm then applies a sequence of selection, crossover, and mutation operations in order to screen and improve the individuals in the population. Individuals with good fitness are retained preferentially, whereas those with low fitness are gradually phased out. This screening and optimization cycle continues until the specified termination conditions are met. Algorithm 1 outlines this procedure; its input is the optimization problem together with the GA settings, and its output is the best individual found.

The genetic algorithm's selection step picks strong individuals from the old population with a given probability, using tactics such as roulette-wheel selection, random competition selection, and best-retention selection. A crossover operator exchanges parts of the chromosomes of two individuals, producing new chromosomes; the most common crossover schemes are single-point, two-point, and uniform crossover. The goal of the mutation operation is to generate better-performing individuals by altering particular sections of individual chromosomes. The fitness function, as the key measure of individual performance, is critical in the selection step and ensures the algorithm's correctness and reliability. The use of the genetic algorithm improves not only the model's convergence speed and effect but also the accuracy of detecting the sugar content in crisp pear.
Principle of Support Vector Machine Algorithm

The support vector machine's main purpose is to find a decision boundary that maximizes the classification interval, also known as the maximum-margin hyperplane. As illustrated in Figure 5, many hyperplanes can discriminate between two classes of samples, but the ideal hyperplane is the one that maximizes the distance from the closest samples to the plane; this distance is referred to as the classification interval (margin) [31]. The derived model, support vector regression (SVR), extends SVM to regression prediction with the goal of minimizing the total deviation between predicted and actual values. The core of SVR lies in selecting an appropriate kernel function, and we used the radial-basis function (RBF) kernel to build the prediction model. Unlike the discrete output of a classification problem (for example, outputs "1" and "2" indicating intact and bruised pears, respectively), the output of the regression problem is continuous and here reflects the sugar content in the pears. In addition, when processing spectral data, the SVR model adopts different regression functions for the linear and nonlinear cases to achieve more accurate predictions, i.e., Formula (10) for the linear case, f(x) = w·x + b, and Formula (11) for the nonlinear case, f(x) = Σ_i (α_i − α_i*) K(x_i, x) + b. In the nonlinear case, the kernel function K(x_i, x) and the Lagrange multipliers α_i and α_i* are key parameters that require careful adjustment and inspection to ensure the best model performance. In summary, by combining SVM and SVR, we predicted the sugar content in crisp pear, providing a reliable technique for assessing the quality of this fruit.
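As a small illustration of the two cases in Formulas (10) and (11), the snippet below fits a linear-kernel and an RBF-kernel support vector regressor to the same data; scikit-learn and synthetic placeholder data are used here for illustration rather than the implementation and dataset of the study, and the hyperparameter values are assumptions.

import numpy as np
from sklearn.svm import SVR

# Synthetic placeholder data standing in for spectra (rows = samples) and Brix values.
rng = np.random.default_rng(0)
X = rng.random((118, 42))
y = 7.4 + 5.1 * rng.random(118)

linear_svr = SVR(kernel="linear", C=10.0).fit(X, y)        # Formula (10): f(x) = w.x + b
rbf_svr = SVR(kernel="rbf", C=10.0, gamma=0.1).fit(X, y)   # Formula (11): kernel expansion
print(linear_svr.predict(X[:3]), rbf_svr.predict(X[:3]))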
Improved Support Vector Machine Based on Genetic Algorithm The parameter choices of the radial-basis function (RBF), particularly the kernel function parameter (g) and penalty factor (C), play an important role in the application of a support vector machine (SVM) [38].The kernel function parameter (g) directly impacts the model's generalization ability: a larger (g) value leads to the model being excessively complex and impairs prediction accuracy for unknown samples.This is typically observed as overfitting.Conversely, lower (g) values may result in the model being undertrained, i.e., underfitting.Similarly, the penalty factor (C) substantially impacts model performance: a larger (C) value limits the tolerance for training errors and raises the risk of overfitting, whereas a lower (C) value may impair the model's general performance, leading to underfitting.As a result, to attain the most accurate SVM prediction performance, these two parameters must be precisely set.For this purpose, a genetic algorithm was used in this study to optimize the SVM parameters g and C and to develop a prediction model that accepts crisp pear sugar content spectral data as the input and crisp pear sugar content prediction value as the output.The model was developed and evaluated in the MATLAB R2023a environment, and the RBF kernel function was used as the SVM kernel parameter.Figure 6 shows the MATLAB implementation process for SVM and its optimization method.In this study, the genetic algorithm's optimization was designed to minimize the error rate of the support vector machine (SVM), with the error rate serving as the fitness function.The procedure started with a population that is randomly generated, with each population member representing a set of hyperparameter values.The error rate for each population member was calculated by training and testing the SVM model, and this error rate was used as the fitness value.We continued to optimize the hyperparameters C and g by selecting the individual with the highest fitness as the parent and establishing a newgeneration population through crossover and mutation procedures.This genetic algorithm iterated until either the set number of iterations or the fitness criterion was met.Finally, the algorithm returned the optimum parameter values [39], which were used to build the SVM model to increase prediction accuracy and dependability [40]. 
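To make the optimization loop concrete, here is a schematic Python sketch of a genetic search over (C, g) for an RBF support vector regressor. It is an illustrative reimplementation (the study itself used MATLAB R2023a): the population size, generation count, parameter bounds, and mutation scale are assumed values, and cross-validated RMSE plays the role of the fitness (error) function.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

def cv_rmse(params, X, y):
    # Fitness: 5-fold cross-validated RMSE of an RBF-kernel SVR with the given C and gamma.
    C, g = params
    pred = cross_val_predict(SVR(kernel="rbf", C=C, gamma=g), X, y, cv=5)
    return float(np.sqrt(np.mean((y - pred) ** 2)))

def ga_svr(X, y, pop_size=20, n_gen=30, bounds=((0.1, 100.0), (1e-4, 1.0)), seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    fit = np.array([cv_rmse(p, X, y) for p in pop])
    for _ in range(n_gen):
        children = []
        while len(children) < pop_size:
            # Tournament selection of two parents (lower fitness = better).
            i, j = rng.choice(pop_size, 2, replace=False)
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.choice(pop_size, 2, replace=False)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            # Arithmetic crossover followed by Gaussian mutation, clipped to the bounds.
            w = rng.random()
            child = w * a + (1 - w) * b + rng.normal(0.0, 0.05, size=2) * (hi - lo)
            children.append(np.clip(child, lo, hi))
        child_fit = np.array([cv_rmse(c, X, y) for c in children])
        # Elitism: keep the best pop_size individuals from parents and children combined.
        merged = np.vstack([pop, children])
        merged_fit = np.concatenate([fit, child_fit])
        order = np.argsort(merged_fit)[:pop_size]
        pop, fit = merged[order], merged_fit[order]
    best = pop[np.argmin(fit)]
    return {"C": float(best[0]), "gamma": float(best[1]), "cv_rmse": float(fit.min())}

The returned (C, gamma) pair would then be used to train the final SVR model on the full calibration set.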
Evaluation Indicators

To evaluate the models developed for predicting crisp pear sugar content, we used the coefficient of determination of the training (calibration) set, R²c, the coefficient of determination of the prediction set, R²p, and the root mean square errors of the calibration and prediction sets, RMSEC and RMSEP [41]. The coefficients of determination reflect how well the predicted values fit the true values in the training and prediction sets; they range between zero and one, and the closer the value is to one, the higher the accuracy of the prediction model and the better the fit. RMSEC and RMSEP reflect the deviation between the predicted and true values for the samples in the training and prediction sets; the closer the value is to zero, the smaller the deviation of the model prediction and the higher the inversion accuracy [42]. The calculation formulas are

R²c = 1 − Σ_{i=1}^{n_c} (Y_c(i) − Y_t(i))² / Σ_{i=1}^{n_c} (Y_t(i) − Ȳ_m)²,
R²p = 1 − Σ_{i=1}^{n_v} (Y_v(i) − Y_t(i))² / Σ_{i=1}^{n_v} (Y_t(i) − Ȳ_m)²,
RMSEC = sqrt[(1/n_c) Σ_{i=1}^{n_c} (Y_c(i) − Y_t(i))²],
RMSEP = sqrt[(1/n_v) Σ_{i=1}^{n_v} (Y_v(i) − Y_t(i))²],

where n_c and n_v are the numbers of samples in the crisp pear correction (calibration) set and prediction set, respectively; Y_t(i) is the true measured value of the i-th sample; Y_c(i) and Y_v(i) are the predicted values of the samples in the correction and prediction sets, respectively; and Ȳ_m is the average of the real measurements in the corresponding set.

Basic Experiments and Settings

To provide a reference point for the proposed optimization model, we first selected two classic regression approaches, linear regression and random forest, both of which have demonstrated consistent and strong performance in a range of settings [43], and compared these two baseline methods. The data were randomly divided in a 7:3 ratio, with the training set accounting for 70% and the test set for 30%. Figure 7 depicts the prediction results of the two models. The performance of both baselines on the spectral data was unsatisfactory: the linear regression model obtained an R² of only 0.37 with an RMSE of 0.80, and the random forest model achieved slightly better results, with an R² of 0.50 and an RMSE of 0.78. To explore the effectiveness of the proposed SVM model, we also modeled the original data directly using SVM and compared it with RF and LR; the comparison results are shown in Table 2.
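The evaluation indicators above and a baseline comparison of this kind can be reproduced with a few lines of Python. The sketch below is illustrative only: the data are synthetic placeholders with the same shape as the study's dataset, and the model settings are defaults rather than the tuned values reported later.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

def r_squared(y_true, y_pred):
    # Coefficient of determination (R2c on the calibration set, R2p on the prediction set).
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    # Root mean square error (RMSEC / RMSEP).
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical spectra and Brix values; the real study used 168 samples x 237 wavelengths.
X = np.random.rand(168, 237)
y = 7.4 + 5.1 * np.random.rand(168)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("LR", LinearRegression()),
                    ("RF", RandomForestRegressor(random_state=0)),
                    ("SVR", SVR(kernel="rbf"))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, round(r_squared(y_te, pred), 3), round(rmse(y_te, pred), 3))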
The experimental findings revealed that the SVM algorithm outperformed the other two algorithms and the traditional approach according to the correlation coefficient on both the test and prediction sets.Although SVM's performance on the training set is not optimal, this does not show the true predictive power of SVM.In the evaluation of the prediction model, more attention should be paid to the model's ability to process unknown data, and the SVM model has significantly better indicators in processing test set data than other methods, so the SVM model has more effective prediction ability than other methods.The LR algorithm obtained an exceptionally low correlation coefficient of zero due to overfitting, which did not imply that LR had high prediction ability.The prediction set's root mean square error intuitively demonstrated SVM's superiority in the prediction process compared with the other two classical techniques.These results indicated the SVM method's capacity to cope with small samples, nonlinearity, and high dimensionality, as well as the success of modeling based on the SVM algorithm in this study.Below, we focus on the preprocessing, feature wavelength extraction, and support vector machine model optimization of the hyperspectral sugar content prediction algorithm in detail. Data Processing Effectiveness Spectral data must be preprocessed before formal modeling to reduce nonsystematic flaws such as instrument noise and dark current [44].We used three techniques to generate support vector regression (SVR) to analyze the influence of several preprocessing methods: standard normal transformation (SNV), multivariate scattering correction (MSC), and convolution smoothing (SG).Table 3 describes the model and its prediction effect.The evaluation indicators in Table 3 show that, compared with the baseline methods shown in Figure 4, the correlation coefficient of the preprocessed data was higher and the root mean square error was smaller.Compared with the baseline methods, substantial performance improvements were achieved after the original data were modeled using SVR (R 2 p = 0.7093, RMSEP = 0.5619).Additionally, when the original data were subjected to the three preprocessing technologies of SNV, MSC, and SG, the effect was further strengthened, the correlation coefficient index was increased, and the root mean square error was reduced.The performance of SG-SVR was particularly notable (R 2 p = 0.7835, RMSEP = 0.4442).We provide a line chart of predicted and true values in Figure 8 to present these results. 
The SVR model generated using the original data was near to the true value in the early data stages, as shown in the upper left of Figure 8, but, when the genuine value changed little, its prediction accuracy was considerably reduced.The SVR model constructed with SG-preprocessed data achieved robust prediction performance closer to the true value.This demonstrates how SG preprocessing technology can increase data quality and, thus, the model's robustness and detection accuracy.Although preprocessing can successfully reduce the impacts of noise and scattering on spectral data analysis, redundant and overlapping band data still exist in full-spectrum data [45].Using full-band modeling not only results in computing inefficiencies, but also lowers the model's prediction accuracy [46].As a result, typical wavelength screening is necessary for full-spectrum data following SG preprocessing to minimize the dimension of the data and remove information that is unnecessary to the detection indications.This speeds up model training and increases forecast accuracy [47].The competitive adaptive reweighting algorithm (CARS) and sequential projection algorithm (SPA) were employed in this study to detect the characteristic spectral wavelengths of the Brix of crisp pear.Figure 9 depicts the filtered characteristic wavelengths. Figure 9 shows that 42 feature variables were retrieved after applying the CARS algorithm to extract features from the spectral data, accounting for 17% of the total number of hyperspectral variables.The retrieved characteristic bands were mostly centered within 40 nm, and the characteristic variable distribution was reasonably continuous.Figure 10 shows that the SPA yielded a total of 10 distinctive bands, accounting for 4% of the total number of hyperspectral spectra.Its distinctive bands were primarily concentrated within 50 nm, with a relatively high frequency of characteristic bands occurring at the trough.Figure 11 depicts typical results of wavelength screening using the CARS method.The number of Monte Carlo samples in this approach was set to 50, the cross-validation number was set to 10, and the ideal principal component number was set to 10. 
Figure 11 depicts how the number of distinctive wavelengths changes as the number of Monte Carlo sampling iterations rises.Figure 11 shows that, when the number of iterations rose, the root mean square error cross-validation (RMSECV) of each subset changed.The early phases of this transition indicated that the model's prediction error gradually decreased via the deletion of redundant and irrelevant information.However, as the iterations progressed, the inaccuracy increased, which may have been due to over-screening and picking too many features.Figure 10 depicts the path map of the variable's regression coefficient during the sampling operation.In Figure 10a, the prominent red vertical line properly identifies the minimum RMSECV, as well as the best subsection, chosen with the CARS algorithm.The dimensionality of data can be reduced by projecting them step-by-step to discover the most important features using the sequential projection algorithm (SPA).In this study, we used SPA to screen out the characteristic wavelengths that were most important to the prediction aim from 237 wavelengths of the sugar content spectrum.Figure 10b depicts the link between the number of features and the root mean square error (RMSE).The RMSE rapidly lowered as the number of selected characteristics rose, as shown in Figure 10.This means that, as more relevant features were incorporated into the model, the prediction accuracy was further increased. We created an SVM model using feature data and compared the results obtained on the training and prediction sets with those of two alternative feature extraction strategies, as shown in Table 4. Table 4 shows that the prediction set performance of the SVM model constructed using the two feature wavelength extraction approaches was quite similar, with CARS performing relatively well.The CARS algorithm's optimal point for feature extraction, RMSECV, was 0.4550, the number of iterations was 34, and 42 essential feature wavelengths were successfully screened out.SPA effectively screened out the 10 most representative distinctive wavelengths from the original 237 wavelengths when the RMSE reached the lowest threshold of 0.6542.Despite SPA filtering fewer and lighter feature wavelengths, the indicators were inferior to those extracted with CARS.The cause for this is that, during the feature-screening phase, the SPA deleted too many features, resulting in the remaining wavelengths being insufficient for accurately reproducing the original spectral properties.Because the SPA performed poorly in screening higher wavelengths, we applied CARS in the following tests to extract distinctive wavelengths. 
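For concreteness, the CARS selection loop described above can be sketched as follows in Python. This is a simplified, illustrative reimplementation rather than the code used in the study: it omits the Monte Carlo resampling of calibration samples within each run, while the 50 runs, 10-fold cross-validation, and 10 PLS components mirror the settings quoted above.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def cars(X, y, n_runs=50, max_components=10, seed=0):
    # Simplified CARS: shrink the retained-variable set with an exponential decay
    # function (EDF), resample variables with probability proportional to the
    # absolute PLS regression coefficients (ARS), and keep the subset with the
    # lowest 10-fold cross-validated RMSE (RMSECV).
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    a = (p / 2.0) ** (1.0 / (n_runs - 1))      # EDF constants so that r_1 = 1
    k = np.log(p / 2.0) / (n_runs - 1)         # and r_N = 2 / p
    keep = np.arange(p)
    best_rmsecv, best_subset = np.inf, keep.copy()
    for i in range(1, n_runs + 1):
        pls = PLSRegression(n_components=min(max_components, len(keep)))
        pls.fit(X[:, keep], y)
        w = np.abs(np.asarray(pls.coef_).ravel())
        w = w / w.sum()
        n_keep = min(len(keep), max(2, int(round(p * a * np.exp(-k * i)))))
        keep = keep[rng.choice(len(keep), size=n_keep, replace=False, p=w)]
        y_cv = cross_val_predict(PLSRegression(n_components=min(max_components, n_keep)),
                                 X[:, keep], y, cv=10)
        rmsecv = float(np.sqrt(np.mean((y - np.ravel(y_cv)) ** 2)))
        if rmsecv < best_rmsecv:
            best_rmsecv, best_subset = rmsecv, keep.copy()
    return best_subset, best_rmsecv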
Optimization Algorithm Effectiveness (1) Performance comparison of SVR and GASVR models The SG-preprocessed full-wavelength data were used to build a genetic-algorithmoptimized support vector machine regression (GASVR) model.The outcomes of this method were compared with those from the classic support vector machine regression (SVR) model.Table 5 displays the specific outcomes.The results in Table 4 show that, compared with the SVR model, GASVR exhibited stabler and superior fitting on the training set (R 2 c = 0.8945, RMSEC = 0.4709).Specifically, the determination coefficient (R 2 p = 0.8156) on the prediction set was 0.0321 higher than that of the SVR model (R 2 p = 0.7835).The root mean square error on the prediction set was 0.0267 lower than that of the SVR model (RMSEP = 0.4442).This verified that optimization using the genetic algorithm substantially improved the performance of the support vector regression prediction model; ( 2) Construction of GASVR regression model To find the best model for predicting crisp pear sugar content, we built a GASVR model using the 48 and 10 distinctive wavelengths screened using CARS and SPA, respectively.We then used the full-wavelength GASVR model prediction findings as the reference standard.Table 6 shows the prediction impacts of the GASVR model for these three different preprocessed wavelength inputs.Table 6 shows that most of the final performance indicators of the GASVR model established using the wavelengths processed via feature engineering were better than those of the GASVR model established using the original wavelengths.The final performance index of the GASVR model established after CARS characteristic wavelength extraction was generally higher than that of the model established after SPA characteristic wavelength extraction.Therefore, the optimal model was the SG-CARS-GASVR model established using the characteristic wavelengths filtered with CARS, which achieved an R 2 = 0.8992 and an RMSE = 0.4400.Ranked second was the MSC-CARS-GASVR model, established using the characteristic wavelengths filtered with CARS, which achieved an R 2 = 0.8812 and an RMSE = 0.4310.The results on the calibration set (R 2 = 0.8550 and RMSE = 0.4709 ) were substantially improved compared with those of the full-wavelength model.To summarize, the GASVR model produced more accurate predictions, and the SG-CARS-GASVR model produced the best prediction performance overall.Scatter plots of the overall evaluation of the CARS-GASVR model, the test set evaluation, the training set evaluation, and the fitting diagrams of the test and true values in the test set are shown in Figure 12. We developed a method for estimating pear sugar content based on a support vector machine (SVM), and we optimized the model using the genetic algorithm (GA), with the goal of increasing prediction accuracy.We also conducted a comparison of our results with those obtained in other studies, as shown in Table 7. 
Conclusions We developed a method for estimating the sugar content in pears using a support vector machine (SVM).We optimized the model using the genetic algorithm (GA), with the goal of increasing the accuracy of sugar content prediction.First, crisp pear spectral data were subjected to extensive preprocessing, including standard normal transformation (SNV), multivariate scattering correction (MSC), and convolution smoothing (SG).The SG technique produced the best results in the preprocessing stage.Finally, the competitive adaptive reweighting (CARS) method and the continuous projection algorithm (SPA) were used to screen the characteristic wavelengths, and an optimized GA support vector machine (GASVR) model was built on this basis. According to the findings, the GASVR model increased the prediction set correlation coefficient by 0.0321 compared with that of the classic SVM model, resulting in higher prediction accuracy.The GASVR model also reduced the root mean square error on the prediction set by 0.0267 compared with that of the SVR model.The CARS approach chose 48 distinctive wavelengths, whereas the SPA method chose 10 important wavelengths throughout the feature selection process.Both methods outperformed the full-wavelength model, demonstrating the efficiency of the characteristic wavelength selection approach.The CARS-based GASVR model performed exceptionally well.Finally, the C (3.22) and g (0.51) parameters of the support vector machine optimized using the genetic method were determined.Compared with the full-wavelength GASVR model, the optimized model's coefficient of determination was 0.0442 higher, and the root mean square error was 0.0309 lower.As a result, we illustrated the effectiveness of employing a genetic algorithm to refine SVM to create a crisp pear sugar content prediction model. In the future, we plan to add noise to the original data for training during the preprocessing stage to improve the model's generalization ability; additional quantitative research will also be conducted on the optimization of model parameters to investigate the specific impact of these parameters on model training.Given the possibility of using more kinds of fruit and sugar content ranges in practical applications, more thorough empirical studies and verification under varied environmental conditions will be important aspects of future study. Figure 1 . Figure 1.A total of 168 crisp pear samples were collected in November 2022, in Ya'an City, Sichuan Province, China (29.9890 latitude north, 102.9820 longitude east). Figure 3 . Figure 3. Raw spectral data.The curves of different colors represent the different wavelengths collected by different samples during data collection.The horizontal coordinate represents wavelength data and the vertical coordinate represents reflection data. Figure 4 . Figure 4. Comparison chart of processed hyperspectral data after (a) multivariate scattering correction, (b) convolution smoothing, and (c) standard normal transformation.The curves of different colors represent the different wavelengths collected by different samples during data collection.The horizontal coordinate represents wavelength data and the vertical coordinate represents reflection data. Figure 5 . Figure 5. Support vector machine classification diagram.The solid line simulates the decision boundary that maximizes the classification interval of the data points, and the space between the two dashed lines represents the maximum confidence interval that exists under the decision boundary. 
Figure 6. Implementation flow chart of the SVM algorithm based on genetic algorithm optimization.

Figure 7. Prediction results of the linear regression model (a) and the random forest model (b). The red shaded area indicates the error range for fitting the data points under each curve.

Figure 8. Line charts of predicted and true values under different data processing techniques: (a) the prediction result for the original data, and the results for the hyperspectral data after (b) convolution smoothing, (c) standard normal transformation, and (d) multivariate scattering correction.

Figures 9 and 10. Two types of feature band extraction algorithms were used to extract the band differences: the extraction results of (a) the CARS algorithm and (b) the SPA.

Figure 11. Characteristic wavelength screening results obtained with CARS: (a) change in the number of CARS-selected wavelengths as the number of iterations increases; (b) the RMSECV curve as the number of iterations increases.

Figure 12. Final model prediction results. The blue circles and red asterisks represent the sample points, and the curves represent the fits for the corresponding sample points. (a) Test set result graph (performance measurement); (b) fit prediction graph for all samples; (c) training set result graph (scatter plot of the CARS-GASVR model on the training set); (d) comparison of prediction results on the test set (fitted versus true values).

Table 1. The training set and the test set were divided according to a ratio of 118:50; the distribution of the data under the different criteria is summarized.

Table 2. The performance of the three benchmark models under different test indexes is compared.

Table 3. The performance of the three pretreatment methods is compared with that of the original data under the SVM model.

Table 4. The performance of the two feature extraction methods after SG preprocessing is compared under the SVM model.

Table 5. The performance of the SVR model before and after GA optimization is compared.

Table 7. Comparison of results between the proposed method and previously reported methods.
9,415
2024-04-01T00:00:00.000
[ "Agricultural and Food Sciences", "Computer Science" ]
Research on the Driving Effect of Industrial Ecological Innovation Efficiency in the Information Age: With the rise of “Internet Plus” and the concept of Industry 4.0, the Internet seems to have become a “life-saving straw” for many industrial enterprises seeking transformation and upgrading. Based on the present situation of industrial development in China, a dynamic system for promoting industrial ecological innovation efficiency is constructed, and the factors influencing industrial ecological innovation efficiency are analyzed using a multiple regression model. It is found that, among the basic driving forces, factor endowment has a significant impact on the efficiency of industrial ecological innovation, and that, among the innovation driving forces, market-oriented reform and technological progress have significant impacts. A dual regression on the innovation power and the basic power is also estimated, which shows that both have a significant positive driving effect on industrial ecological innovation efficiency, with the driving effect of the fundamental power being slightly stronger than that of the innovation power. Introduction In the Information Age, China's industry is developing rapidly. However, it also faces outstanding problems and challenges: large amounts of industrial pollution and resource consumption have caused unprecedented damage to the ecological environment. At the 19th Party Congress, General Secretary Xi Jinping pointed out that the driving force of economic development is shifting from factor-driven to innovation-driven, indicating that sustainable economic development follows the path of ecological innovation. Industrial eco-innovation efficiency, which aims at promoting the sustainable development of the environment, the economy, and society, forms a new driving-force system for eco-innovation. It is therefore necessary to explore which factors have a specific effect on the efficiency of industrial eco-innovation, and to examine the power system behind it and its final driving effect. In national and international research, the efficiency of industrial ecological innovation is increasingly becoming a research hotspot. As for the definition of industrial eco-innovation, Fussler and James first proposed “green innovation” in 1996, and the following year James defined it explicitly as "new products and processes that significantly reduce environmental impacts and add value to customers and enterprises" [1]. Han Jieping holds that industrial ecological innovation is a process, technology, operation, system, and product innovation that should consider economic, social, and environmental benefits [2]. Fan Decheng argues that there are similarities and differences among the driving forces of industrial eco-innovation in different regions [3]. Regarding the factors influencing the dynamic system of industrial eco-innovation efficiency, Han Jieping argues that the development of industrial eco-innovation efficiency should be analyzed at the macro, meso, and micro levels [4]. Hui Shupeng and others hold that innovation power is the key driver of growth in industrial ecological innovation efficiency [5].
Generally speaking, researchers at home and abroad are still at an initial stage in studying industrial ecological innovation: a sound evaluation index system for industrial ecological innovation efficiency has not yet been established, and there are few studies on the dynamic system and driving effects that influence the efficiency of industrial eco-innovation. Dynamic index system Considering the availability of data, the influencing factors are summarized into two types. The first type comprises the basic factors of industrial development, reflecting the current situation of industrial development, the environment, and inputs; the second type comprises the innovation factors of industrial transformation, namely system innovation, technology innovation, and the various information-technology innovation factors influencing industrial development in the new era. Regression modeling The basic factors and innovation factors are used to build a statistical regression model of industrial ecological innovation efficiency. The regression model is as follows:

Y = a0 + a1 lnX1 + a2 lnX2 + a3 lnX3 + a4 lnX4 + a5 lnX5 + a6 lnX6 + a7 lnX7 + a8 lnX8 + e (1)

Here Y is the dependent variable, industrial ecological innovation efficiency, and Xi (i = 1, 2, ..., 8) are the independent variables: X1 is the industrial development scale (IDS), X2 is factor endowment (FEN), X3 is openness to the outside world (FCD), X4 is the energy consumption structure (ECS), X5 is environmental regulation (ENR), X6 is market-oriented reform (MOR), X7 is technical progress (TEP), and X8 is technical fusion (TEI). The ai (i = 1, 2, ..., 8) are the coefficients of the independent variables, a0 is the constant term, and e is the random error. Based on the statistical regression analysis of the above eight influencing factors, these eight factors can be classified into two kinds of driving forces: one is the basic power (BSP), which includes the industrial ecological scale, factor endowment, the degree of opening to the outside world, and the energy consumption structure; the other is the innovation power (Ind), which includes environmental regulation, market-oriented reform, technological progress, and technology integration. By grouping each set of four factors into one class of driving force, we build a binary statistical regression model:

Y = b0 + b1 Z1 + b2 Z2 + e

where Y is the dependent variable, industrial ecological innovation efficiency, the bi (i = 1, 2) are the coefficients of the independent variables, and the Zi (i = 1, 2) are the independent variables, namely BSP and Ind. The statistical regression models of the industrial eco-innovation efficiency dynamic system were estimated with Stata 13.1 software. Analysis of regression results Among the basic factors, factor endowment has a significant positive impact on the efficiency of industrial ecological innovation, indicating that improving industrial ecological innovation efficiency is still, to a certain extent, inseparable from fixed-asset investment and labor input. The degree of opening to the outside world has a positive impact on the efficiency of industrial ecological innovation, which shows that total industrial import and export volume and local GDP are factors that affect industrial ecological innovation efficiency. The scale of industrial development has a positive impact on the efficiency of industrial ecological innovation, indicating that the development of China's industry is steadily improving.
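To make the two regressions concrete, the sketch below shows how models of this form could be estimated in Python. All variable names, the way the two composite driving forces are aggregated, and the synthetic data are illustrative assumptions rather than the study's actual panel or Stata specification.

```python
# Illustrative sketch of the log-linear multiple regression in Equation (1)
# and the two-factor ("dual") regression on BSP and Ind. The column names and
# the synthetic data are placeholders, not the study's actual data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120  # hypothetical number of region-year observations

# Synthetic stand-in data for the eight influencing factors (all positive).
factors = ["IDS", "FEN", "FCD", "ECS", "ENR", "MOR", "TEP", "TEI"]
df = pd.DataFrame({f: rng.lognormal(mean=0.0, sigma=0.5, size=n) for f in factors})
df["Y"] = rng.normal(0.6, 0.1, size=n)  # eco-innovation efficiency scores

# Equation (1): Y = a0 + sum_i a_i * ln(X_i) + e
X1 = sm.add_constant(np.log(df[factors]))
model1 = sm.OLS(df["Y"], X1).fit()
print(model1.summary())

# Dual regression on the two composite driving forces. Here each force is
# proxied by the mean of its four (standardized, log-transformed) components;
# the original study may aggregate the factors differently.
z = (np.log(df[factors]) - np.log(df[factors]).mean()) / np.log(df[factors]).std()
df["BSP"] = z[["IDS", "FEN", "FCD", "ECS"]].mean(axis=1)
df["IND"] = z[["ENR", "MOR", "TEP", "TEI"]].mean(axis=1)
X2 = sm.add_constant(df[["BSP", "IND"]])
model2 = sm.OLS(df["Y"], X2).fit()
print(model2.params)  # b0, b1 (BSP), b2 (Ind)
```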
The impact of the energy consumption structure on the efficiency of industrial ecological innovation is negative: as a non-renewable energy source, excessive consumption of coal directly hinders the improvement of industrial ecological efficiency. Among the innovation factors, market-oriented reform and technological progress have significant negative impacts on the efficiency of industrial ecological innovation, indicating that market-oriented reform hinders the improvement of industrial ecological innovation efficiency. Technological progress also has a significant negative impact on the efficiency of industrial ecological innovation, indicating that this component of the index system hinders the improvement of industrial ecological innovation efficiency. Technology integration has a positive impact on the efficiency of industrial ecological innovation, indicating that the integrated development of China's industry and information technology actively promotes improvements in industrial ecological innovation efficiency. Environmental regulation also has an impact on the efficiency of industrial ecological innovation, indicating that the efficiency of industrial ecological innovation in China is affected by many aspects and fields. Conclusion This paper first constructs a statistical regression model of the factors influencing industrial ecological innovation efficiency, dividing the basic power and the innovation power into four types of influencing factors each. On the basis of this preliminary multiple regression model, the factors with positive effects include the industrial ecological scale, factor endowment, the degree of opening to the outside world, and technological progress, while the factors with negative effects include the energy consumption structure, environmental regulation, market-oriented reform, and technology integration. Secondly, a statistical regression model of the driving forces of industrial ecological innovation efficiency is constructed. The results show that the driving effect of the basic power is slightly stronger than that of the innovation power, and that both the basic power and the innovation power have a significant positive impact on industrial ecological innovation efficiency, with their driving role gradually strengthening. From the statistical regression model, it may seem that simply increasing investment in the basic power is enough to improve the efficiency of industrial ecological innovation. This is not the case: the driving effect of the innovation power on the efficiency of industrial ecological innovation may be nonlinear in some control variables. Breaking the constraints of the environment and resources and striving to achieve the sustainable development of industry requires doing everything possible to improve the efficiency of industrial ecological innovation.
1,965.6
2021-01-01T00:00:00.000
[ "Environmental Science", "Economics" ]
On the Singular Spectrum of the Radiation Operator for Multiple and Extended Observation Domains The problem of studying how spatial diversity impacts on the spectrum (singular values) of the radiation operator is addressed.This topic is of great importance because of its connection with the so-called number of degrees of freedom concept which in turn is a key parameter in inverse source problems as well as to the problem of transmitting information by waves from a source domain to an observation domain.The case of a bounded rectilinear source with the radiated field observed over multiple bounded rectilinear domains parallel to the source is considered. Then, the analysis is generalized to two-dimensional extended observation domains. Analytical arguments are developed to estimate the pertinent singular value behavior.This allows highlighting the way observation domain features affect spectrum behavior. Numerical examples are shown to support the analytical results. Introduction Determining the number of degrees of freedom (NDF) of the radiated field is one of the classical and most relevant problems in electromagnetics and in optics.This is because the NDF is a crucial parameter which characterizes both forward and inverse source problems.The reader can refer to the paper by Piestun and Miller [1] for a thorough account about how the research on this field has progressed since the pioneering works of Gabor [2] and di Francia [3]. The NDF represents the number of significant and independent parameters needed to represent the radiated field with a given degree of accuracy [4].Moreover, it is also relevant in inverse source [5] and inverse scattering problems [6] as it is linked to the resolution achievable in the inversions.By interpreting the radiation phenomenon as a way to propagate information from a source domain to an observation domain, the NDF is connected to the question of estimating the number of the available communication channels [7].This point of view is of fundamental importance in space-time wireless systems and in particular to multipleinput multiple-output (MIMO) communication systems.The link between the electromagnetic NDF and Shannon's information theory has been recently discussed in [8].The NDF is also linked to the concepts of the -entropy and -capacity which characterize the topological information theory introduced by Kolmogorov and Tihomirov [9].In this context, as shown in [10], the NDF gives a measure of the number of -distinguishable messages that can be conveyed back from the noisy data (with being the noise level) in order to recover the source.In that paper, the connection between the topological information theory and Shannon's one has been also discussed [11]. The NDF can be estimated by adopting diffraction arguments or sampling approach [7].Alternatively, as the radiation operator is a linear nonsymmetric compact operator [12], its singular value decomposition (SVD) [13] provides a further way of tackling the problem.In particular, the singular value decomposition (SVD) should be preferred as it allows for an easier understanding of the flow of information [7].Moreover, subsets of the range of the radiation operator which are spanned by the singular functions exhibit extremal properties [14].In other words, exploiting functions which are different from the singular functions leads to the use of more parameters [1]. 
It is known that the singular values of a compact operator cluster to zero as their index grows. In addition, as the regularity of the kernel increases, the singular values decrease more and more quickly [15]. Accordingly, since the kernel function of the radiation operator behaves like an entire function of exponential type [16] (when the source and the observation domains do not overlap), the singular values exhibit an abrupt exponential decay beyond a critical index which in general depends on the size of the scatterers, the working frequency, and the observation domain. This has been shown explicitly for some particular configurations for which the multipole expansion coincides with the radiation operator spectrum [8,17]. In these cases, the singular values exhibit an almost step-like behavior [18] (i.e., the radiation operator is almost rank deficient) and the NDF can be quite naturally estimated as the number of singular values preceding the knee. What is more, for such a case Shannon's number (i.e., the operator trace) allows obtaining an NDF estimation without the need of explicitly working out the singular value behavior [19]. In most general cases (such as the ones addressed herein), the step-like behavior is not met. In these cases, defining the NDF is not so trivial. Indeed, noise and the available a priori information enter the picture, so that the NDF becomes dependent on them. As to the inversion problem, noise and a priori information can be exploited in the regularization procedure. The simplest way to achieve regularization is by numerical filtering, that is, by truncating the SVD expansion in order to establish a compromise between the truncation error and the noise contribution. If the noise level ε is known and the solution norm is assumed to be bounded by E, then the projections corresponding to the singular values below ε/E are discarded [13]. The very popular Tikhonov variational method provides a smoother filtering of the singular values that does not require truncation. However, in practice the reconstruction series must be truncated, and the truncation index is usually chosen as above. Note that the same results are achieved if the noise and the unknown source are considered as uncorrelated white Gaussian random processes with variances equal to ε² and E², respectively [20]. The same results are also obtained by employing the probabilistic approach presented in [21]. Turning to the problem from the information point of view, under the same constraints as above, it has been shown that the number of distinguishable messages which can be sent back from the data to recover the unknown source depends only on the singular values above the threshold ε/E [10]. Moreover, looking at each one-to-one relationship between the left and the right singular functions as a communication channel with gain given by the corresponding singular value, the above condition guarantees that the channels with gain lower than ε/E convey an amount of Shannon information less than (ln 2)/2 [22]. All the previous arguments suggest identifying the NDF by a truncation criterion. Moreover, they highlight the role played by the singular value behavior. Therefore, in this paper we focus on the estimation of the singular value behavior and on how spatial diversity affects it. However, it must be remarked that the previous constraints, being in some sense global, do not assure that the bulk of the unknown source is recovered, nor that the conveyed information is maximized.
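The truncation rule just described (keep only the singular components whose singular values exceed ε/E) is easy to illustrate numerically. The sketch below applies it to a generic ill-conditioned operator; the operator, noise level, and norm bound are assumptions chosen for illustration only, not the radiation operator of this paper.

```python
# Minimal sketch of truncated-SVD regularization with the eps/E rule discussed
# above. The operator A, the noise level eps, and the source-norm bound E are
# illustrative assumptions, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(2)

# A generic ill-conditioned operator with exponentially decaying singular values.
n = 60
Uq, _ = np.linalg.qr(rng.normal(size=(n, n)))
Vq, _ = np.linalg.qr(rng.normal(size=(n, n)))
s_true = np.exp(-0.3 * np.arange(n))
A = Uq @ np.diag(s_true) @ Vq.T

E = 5.0                                        # assumed bound on the source norm
x_true = rng.normal(size=n)
x_true *= E / np.linalg.norm(x_true)
eps = 1e-3                                     # assumed data noise level
y = A @ x_true + eps * rng.normal(size=n)

# Truncated SVD inversion: keep only components with sigma_k >= eps / E.
Umat, s, Vh = np.linalg.svd(A)
keep = s >= eps / E
x_rec = Vh[keep].T @ ((Umat[:, keep].T @ y) / s[keep])

print(f"components kept: {keep.sum()} of {n}")
print(f"relative error : {np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.3f}")
```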
Source domain Observation domain 1 Observation domain 2 This happens when the source projects significantly over high order singular functions, when the noise is colored, or when some kind of constraints is exploited [8].In these cases identifying the NDF is more involved and the knowledge of the singular value behavior, even though still important, provides only a partial picture of the problem.As said above, the focus here is on the estimation of the singular values of the radiation operator when the radiated field is collected over multiple observation domains.For the sake of simplicity, the problem is addressed for a twodimensional scalar geometry.The source is assumed to be supported over a bounded rectilinear domain, whereas the radiated field is observed either over multiple bounded rectilinear domains or over a two-dimensional observation domain.Green's function is written under the Fresnel zone approximation. This problem has been addressed previously in [23] for the case of two observation domains.There, by numerical results, it is shown that the second observation domain can lead to a two-step behavior for the singular values.However, the number of significant singular values remains the same as for the single observation case and predictable through a geometrical criterion.Afterwards, the research progressed in [24] where the mathematical rationale of the problem was derived.These results provided a tool to accurately estimate the singular value behavior and to foreseen whether a two-step occurs.However, the analysis in [24] was limited to the case of observation domain of the same size.In this contribution we complete the analysis, by extending the results in [24].To this end, we first consider the complementary situation of two observation domains having different extents but subtending the same observation angular sector.Furthermore, the case of two-dimensional observation domain is addressed. Problem Formulation and Mathematical Preliminaries Let us consider the two-dimensional scalar configuration depicted in Figure 1 where invariance is assumed along the -axis. The field radiated by an electric current supported over the segment = [− , ] of the -axis is observed over the observation domain located in the Fresnel zone. Two cases are considered.In the first one, the observation domain consists of an ensemble of segments along the -axis at different distances from the source, that is, = ⋃ with = [− , ] located at .In the second one, the observation domain In terms of operator notation, in the case of observation domains, the radiation phenomenon can be written as where, a part from a scalar factor, with being the free-space wavenumber and Instead, for the extended observation domain (1) modifies as where the operator A now reads as In ( 1) and (3), 2 (⋅) means the set of square integrable functions supported over the domain enclosed on the brackets.Such functional spaces are equipped with the usual scalar products.More in detail, in the case of multiple domains, it results that whereas for extended observation domain In order to estimate the behavior of singular value associated with the operator A, in the next sections we tackle the associated eigenvalue problem where A † is the adjoint of the operator A and are the squares of the singular values of the operator A. 
However, before proceeding further along this path, first some basic mathematical facts are here recalled.Let us denote by B Ω the band limiting projector, that is, so that the B Ω () spectrum is null for ∉ Ω.Here Ω is assumed to be a single compact interval but needs not to be centered around the zero frequency. The spatial limiting projector P () is defined as Furthermore, we introduce the operator P B Ω P ().When both Ω and are centered around the zero, this operator assumes the very familiar expression where () and (Ω) are the measures of such intervals.This operator has been extensively studied in the literature [25,26].It is a compact self-adjoint definite positive operator whose eigenspectrum is given in terms of the prolate spheroidal wave-functions () = (, )/√( ()). Here, = ()(Ω)/4 is the so-called spatial-bandwidth product, (, ) is the th prolate function, and () are the corresponding eigenvalues that enjoy a step-like behavior: they are almost equal to one till the index reaches = [2/], [⋅] being the greater integer lower than its argument.Beyond such an index they decrease abruptly (i.e., exponentially) to zero. Having fixed () and (Ω), when Ω and/or are not centered intervals, P B Ω P is unitary equivalent to the operator (10).Accordingly, eigenvalues hold the same, whereas eigenfunctions are easily linked to (, ) by unitary transformations. The following operator plays a crucial role for our analysis: where Ω 1 and Ω 2 are disjoint bands and 1 and 2 are amplitude factors.As shown in [24], the eigenvalues can be very well approximated in terms of those associated with each single operator.Indeed, if 1 and 2 are both greater than one, then where 1 and 2 are the eigenfunctions of P B Ω 1 P and P B Ω 2 P , respectively.Of course, equality to zero never holds as such operators are positive definite and hence have empty null spaces.In particular, ( 12) specially holds for either International Journal of Antennas and Propagation Accordingly, the eigensystem of ( 11) can be approximated as that is, as the union of the eigenspectra associated with the two single operators.(From now on, in order to avoid confusion, when necessary, the eigensystem corresponding to an operator A will be denoted as We conclude this section by reporting the following proposition which will be useful for the case of extended observation domain. Let us consider a convolution operator with the kernel function () ∈ 2 R .Of course, this is a Hilbert-Schmidt operator and it is thus compact.Let us denote with () the Fourier transform of ().() is assumed to be a real positive function and of compact support where K = max Let us introduce two "auxiliary" operators written as Now, the following proposition can be stated. , and [K], are the eigenvalues of K, K, and K, respectively.Then The proof is omitted but follows from Lemma 3.1 reported in [27]. Previous Results We start by recalling previous results concerning the case of two observations domain of equal size [− 1 , 1 ] located at 1 and 2 , respectively, with 2 > 1 . 
In this case, the relevant eigenvalues problem (7) writes explicitly as The eigensystem of operator in (19) is not known in closed form.However, a simple approximated model can be worked out.To this end, it is noted that in Fresnel zone 1/ is a slowly varying function.Therefore, by assuming that 1/ 1 ≃ 1/ 2 in the exponential terms, ( 19) can be recast as where and exp(/(2 1 ) 2 ) () = ũ ().Now, according to results pertinent to operator (11), it results that the eigensystem of ( 19) can be well approximated by the union of the eigensystem of the three Slepian operators.Hence, the eigenvalues exhibit a two-step behavior: (iii) other eigenvalues almost 0 due to eigenvalues of the three operators that decay exponentially. In particular, the above theory allows to forecast a single step behavior when the integer parts of the last two addends in the expression of 2 are zeros.In general, we expect a doublestep behavior for the singular values.More details and the numerical check of this result are reported in [24].In particular, previous model holds true even 2 ≫ 1 . Hence, for the two-observation domains of equal size it can be concluded that the second domain entails a twostep behavior for the singular values.Therefore, the NDF depends on the noise that set the threshold above which the singular values can be considered significant.If significant means as compared with zero then the NDF coincides with that obtainable by using the single observation domain which subtends the largest observation angular sector, that is, max{ 1 / 1 , 2 / 2 }.However, as part of the singular values have higher amplitude, the strength of the connection is increased [7].Equivalently, while tackling the inverse problem, the inversion is expected to be more stable. As a concluding remark we note that previous analysis can be easily adapted to account for more than two observation domains.In this case singular values will exhibit a multistep behavior as long as the spatial-bandwidth products involved in the pertinent version of (20) are all sufficiently greater than one. 
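A small numerical experiment of the kind summarized above can be reproduced by discretizing the Fresnel-zone kernel for a strip source observed over two parallel strips and computing the singular values of the stacked operator. The sketch below is only indicative: the geometry, sampling, and normalization are illustrative choices and not the exact configurations of the figures cited in this paper.

```python
# Numerical sketch of the multi-step singular-value behavior discussed above:
# a strip source observed over two parallel strips in the Fresnel zone.
# Geometry values are illustrative, not the exact cases of the cited figures.
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
a = 20 * wavelength                        # source half-extent
X = [20 * wavelength, 50 * wavelength]     # observation half-extents
z = [240 * wavelength, 600 * wavelength]   # observation distances

ns, no = 400, 400
xs, dxs = np.linspace(-a, a, ns, retstep=True)

blocks = []
for Xi, zi in zip(X, z):
    xo, dxo = np.linspace(-Xi, Xi, no, retstep=True)
    # Fresnel-zone kernel (constant factors dropped); the quadrature weights
    # make the matrix a discretization of the continuous operator.
    K = np.exp(-1j * k * (xo[:, None] - xs[None, :]) ** 2 / (2 * zi)) / np.sqrt(zi)
    blocks.append(np.sqrt(dxo * dxs) * K)

A = np.vstack(blocks)                      # field stacked over both observation domains
sv = np.linalg.svd(A, compute_uv=False)
sv /= sv[0]

# A plateau followed by one or more knees should be visible in the decay.
for i in range(0, 60, 5):
    print(f"sigma_{i:2d} / sigma_0 = {sv[i]:.3e}")
```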
Two Observation Domains That Subtend the Same Angular Sector We now turn to address the case when the two observation domains subtend the same observation angular sector (see Figure 2).This means that we assume 1 / 1 = 2 / 2 .Under the same assumption as in previous section (20) particularizes as with ).This is now a standard Slepian operator whose eigenvalues have a step-like behavior with the knee occurring at = [2/], with = 2 / 2 .Therefore, it can be readily concluded that by adding further observation domains which subtend the same observation sector does not change the single step behavior which pertains the single observation domain.Rather, this leads to only an increase of the numerical value of the singular values across their flat part.This result as well as the one recalled in the previous section can have a simple interpretation from the diffraction arguments perspective.Indeed, the eigenvalue two-step behavior couls be exptected if the observation domains are characterized in terms of the angular sectors they subtend.In fact, the first stronger flat region can be seen as being due to the information collected over 1 and 2 under the angular sector subtended by 2 (which is common to both observation domains) and the second flat region as being due to only 1 and collected under the remaining directions belonging to the angular sector subtended by 1 .Therefore, when 1 and 2 subtend the same angular sector, a single flat part must be observed with the numeric value of the singular values doubled with respect to the single observation case. These arguments are convincing and very well verified in Figure 3. However, when the distance between the two observation domains is increased, the singular values of the radiation operator no longer enjoy the single step behavior.Indeed, as shown in Figure 4, despite the fact that the theory developed so far would predict the same behavior as in Figure 3, now the singular values exhibit a two-step behavior.More interestingly, the knee of the second step occurs at an index greater than the one that would be expected according to the observation angular sector. It is clear that this means that the approximated model employed to derive (21) does not work any longer.Hence, it is necessary to take a step back in order to better analyze the model.To this end, we relax the hypothesis that 1/ 1 ≃ 1/ 2 and rewrite (19) as By looking at Figure 5, where the example of Figure 4 is rerun, it is seen that now the new approximated model works fairly well in predicting the squared singular values of the radiation operator as the two-step behavior is very well reproduced. Extended Observation Domains Previous results can be trivially generalized to the case of multiple observation domains.However, the same theoretical arguments can be applied as long as the spatial-bandwidth products occurring while arranging the pertinent operator as in (20) or (25) are sufficiently greater than one.In [28], it is shown that each spatial-bandwidth product can be as low as 4.This puts a limit on the number of rectilinear observation domains that can be taken within a fixed extent along of the observation domain. This drawback can be completely avoided if the problem is directly cast by considering a two-dimensional observation domain.The cases that will be addressed herein are sketched in Figure 6. Let us start from the configuration reported in Figure 6(a). 
In this case it results that By adopting the same approximation as done for the domains of equal extent, (26) can be rewritten as with Hence, the problem is cast as the study of the convolution operator (27) whose kernel function () has a Fourier transform given by When ( 1 / min − 1 / max ) 1 ≤ 1, then with Ω 0 = [− 1 / max , 1 / max ].Therefore, the eigenvalues of (26) (and hence the singular values of the corresponding radiation operator) are very well approximated by a steplike behavior.This is shown in Figure 7. Hence, the NDF basically remains the same as the single observation domain. On the contrary, the numerical values across the flat part have drastically increased at ( max − min ), which for the presented example is 10.Approximation in (29) cannot be invoked if the extent of the observation domain along is increased.Then, according to the proposition reported at the end of Section 2, we can construct the two auxiliary operators Now, as long as = 1 Δ/2 is sufficiently greater than 1 (in the sense explained above), the eigenvalues of à † A and  † A can be foreseen by applying the same reasoning as in (11).Accordingly, they can be used to estimate those of A † A. The way to achieve that is summarized in the following statement. The goodness of this statement can be appreciated by the example reported in Figure 8.As expected, the first 0 eigenvalues are almost constant.Beyond such an index, however, the eigenvalues decay more gracefully than the previous case.Furthermore, the role of the observation extent along is still more evident than in the result of Figure 7. International Journal of Antennas and Propagation Figure 6: Geometries of the problem for the case of twodimensional observation domain. Indeed, the numerical value of the eigenvalues are greatly increased (up to max − min = 100 times) than the single observation domain. The same analysis can be repeated for the observation domain depicted in Figure 6(b).For such a case we have that where denotes the observation angular sector and the same approximation as in ( 24) has been exploited.Also here, a convolution operator has to be studied but now the kernel function () has a Fourier transform given by when /2(1/ min − 1/z max ) < 2. Finally, a statement similar to Statement 1 can be easily derived which allows to foreseen the singular value behavior. Conclusion In this paper, we continued the research on the way the spatial diversity impacts on the singular value behavior of the radiation operator we started in the papers [23,24]. As in those papers, here the study has been developed for a canonic two-dimensional scalar configuration where the source and the observation domains were represented by bounded parallel strips.Also the case of an extended observation domain has been addressed.These simple scenarios allowed us to develop analytical arguments which clearly permitted to estimate (also quantitatively) the singular value behavior.In particular, for the case of a two-dimensional observation domain, upper and lower bounds for the singular values have been determined: these permitted to estimate the number of singular values which are above a given threshold.It is important to remark that the method developed for addressing two-dimensional observation domains provides a tool for analyzing more general convolution operators provided that they are of Hilbert-Schmidt class. 
It has been shown that the main effect of considering multiple observation domains is a shaping and and a magnitude amplification of the singular values.In particular, magnitude increasing can be considerable in the case of extended observation domains as it is proportional to its size max − min along depth.Moreover, it has been shown that the number of significant singular values can be greater than those predicted by conventional diffraction arguments.In particular, this happens when the observation domains subtend the same angular sector and are sufficiently apart from each other. The addressed problem and the obtained results are relevant not only from the mathematical point of view but also for classical electromagnetic problems such as the inverse source and the transmission of information.This is because the singular values of the radiation operator are intimately connected to the concept of NDF.Indeed, when some global constraints are employed (as discussed in the introduction), the number of relevant singular values right coincide with the NDF.Under this circumstance, the results described above can be rephrased by saying that spatial diversity can allow for a more stable inversion procedure, or by changing perspective, that it entails a significant growth on the information content [10]. As a concluding remark, we note that the extension of the present research to the case of a planar source and a volumetric observation domain is rather simple as in the Fresnel zone the kernel factorizes with respect to the two transversal coordinates.Furthermore, addressing far zone cases is even more simple.Instead, making the observation domains in the source near zone appears more complicated.We defer this topic for future developments. Figure 1 : Figure 1: Geometry of the problem for the case of two observation domains. Figure 2 : Figure 2: Geometry of the problem for the case of two observation domains which subtend the same observation angular sector. Figure 3 : Figure 3: Two observation domains that subtend the same angular sector for the case of = 20 , 1 = 20 , 2 = 25 , 1 = 100 , and 2 = 125 .Comparison between the squared singular values of the radiation operator (denoted as actual) and the eigenvalues of the approximate model given by (21) (denoted as approx.). Figure 4 : Figure 4: Two observation domains that subtend the same angular sector for the case of = 50 , 1 = 20 , 2 = 50 , 1 = 240 , and 2 = 600 .Comparison between the squared singular values of the radiation operator (denoted as actual) and the eigenvalues of the approximate model given by (21) (denoted as approx.). InternationalFigure 5 : Figure 5: Two observation domains that subtend the same angular sector for the case of = 50 , 1 = 20 , 2 = 50 , 1 = 240 , and 2 = 600 .Comparison between the squared singular values of the radiation operator (denoted as actual) and the eigenvalues of the approximate model given by (25) (denoted as approx.). [A] and [A].The same type of notation will be used also for the singular value decomposition.Instead, we maintain the notation for the eigenvalues associated with the prolate spheroidal functions, with a clear indication of the spatial-bandwidth product () when necessary.)Hence,ordered in nonincreasing way exhibit a two-step behavior.The first knee occurs at[2 1 /] (when 1 > 2 ) or [2 2 /] (for 2 > 1 ), whereas the second one is at [2 1 /]+[2 2 /].Moreover, the first eigenvalue jump is related to the ration 1 / 2 .
5,839.4
2013-07-18T00:00:00.000
[ "Physics", "Engineering" ]
Creative destruction: Sparse activity emerges on the mammal connectome under a simulated communication strategy with collisions and redundancy Signal interactions in brain network communication have been little studied. We describe how nonlinear collision rules on simulated mammal brain networks can result in sparse activity dynamics characteristic of mammalian neural systems. We tested the effects of collisions in “information spreading” (IS) routing models and in standard random walk (RW) routing models. Simulations employed synchronous agents on tracer-based mesoscale mammal connectomes at a range of signal loads. We find that RW models have high average activity that increases with load. Activity in RW models is also densely distributed over nodes: a substantial fraction is highly active in a given time window, and this fraction increases with load. Surprisingly, while IS models make many more attempts to pass signals, they show lower net activity due to collisions compared to RW, and activity in IS increases little as function of load. Activity in IS also shows greater sparseness than RW, and sparseness decreases slowly with load. Results hold on two networks of the monkey cortex and one of the mouse whole-brain. We also find evidence that activity is lower and more sparse for empirical networks compared to degree-matched randomized networks under IS, suggesting that brain network topology supports IS-like routing strategies. Communication demands such as flexibility and reliability place important constraints on the system (Poggio, 1984;Hahn et al., 2018). Effective and robust routing strategies are especially important on the mammal connectome because the topology is such that paths of just a few synapses exist between any node and practically any other node. Brain networks also need to dynamically exchange multimodal information among a large number of nodes in real time without appreciably changing network topology. In addition, routing in the brain lacks central control. To begin to understand what routing strategy is in use in the brain, one can look to design strategies in engineered systems (Graham & Rockmore, 2011;Graham, 2014Graham, , 2017Navlakha et al., 2015Navlakha et al., , 2017Fornito, Zalesky & Bullmore, 2016). A fundamental engineering goal for any large-scale communication system is the management of signal interactions. Signal interactions could take many forms such as summation, thresholding, collision, and duplication. As Collision: An event in which signals on two or more of a node's incoming edges arrive at the same time. we describe in the Discussion, brain-like networks with signal interactions involving summations/thresholds tend to produce undesirable behavior such as activity die-off or overload (Kaiser et al., 2007;Kaiser & Hilgetag, 2010). Here we perform initial simulations of two interactions that have received little attention: signal collision and signal duplication (i.e., redundancy). Collisions occur when signals from different sources converge on the same target at the same time. They are the price paid for the ability to route signals selectively to many possible destinations on a fixed network. Collision dynamics are emergent: they are a nonlinear effect of topology, node dynamics, and current traffic. All large-scale engineered communication systems have means for managing collisions, through redress, arbitration, and other strategies (see, e.g., Kleinrock, 1976;Mišić & Mišić, 2014). 
Given the a priori likelihood of collisions on small-world-like brain networks (compared to, for example, lattice networks), we argue that it is in the interest of brain networks to establish Emergent sparseness on the mammal connectome dynamic interactions that manage collisions successfully across the entire brain. Management of collisions is especially important in mesoscale brain networks since behavior-related information must be exchanged among a large number of subsystems. However, collisions introduce nonlinearities, which are difficult to study analytically. Nonlinear signal interactions are increasingly investigated with explicit agent-based models in the study of other communication networks (e.g., epidemics: Min et al., 2020), but little work in network neuroscience has used explicit approaches. For example, most RW models (e.g., Noh & Rieger, 2004;Abdelnour, Voss, & Raj, 2014), which route signals to randomly chosen outgoing edges use analytical methods that cannot account for signal interactions such as collisions. An exception is Mišić et al. (2014aMišić et al. ( , 2014b. In this pair of studies, buffers (node memory allocations, a form of nonlinearity) were employed to manage collisions in a RW routing model. In this scheme, colliding signals at the inputs of a node are lined up in a queue and stored in node memory until they can be directed to a randomly chosen outgoing edge. Buffers are ubiquitous on the Internet and may be useful in brains. But though several neurobiological mechanisms for buffering have been proposed, they remain hypothetical (see, e.g., Goldman-Rakic, 1996;Graham, 2014;Funahashi, 2015). An alternative to RW models is shortest path routing, which invokes the logic of evolved optimality (e.g., Bullmore & Sporns, 2009;Rubinov & Sporns, 2010;Goñi et al., 2013Goñi et al., , 2014. Shortest path strategies also have parallels in engineered systems like the Internet (e.g., OSPF: open shortest path first routing; see, e.g., Sosnovich et al., 2017). However, shortest path models typically ignore signal interactions such as collisions, again because these models are studied analytically. Yet signal interactions are especially important in a shortest path context. Though it has not been well recognized, successful shortest path routing requires knowledge of current network traffic: a short path is not necessarily short if there is congestion. Even when short path models include small buffers, they show substantial message loss due to collisions (Graham & Hao, 2018). In any case, shortest path routing is considered implausible because it requires global knowledge of network architecture to select shortest paths for all signals (Seguin, Razi, & Zalesky, 2018, but see Mišić et al., 2015. In the present work, we investigate a strategy of fully destructive collisions. The brain may in fact use a less punitive strategy than this, but neurophysiological evidence increasingly supports the notion that signals in the brain regularly collide and are destroyed. For example, Sardi et al. (2017) demonstrated the failure of coincident signals above threshold to elicit spikes in vitro. They also showed that excitatory signals can cancel each other when they collide "head on" (Sardi et al., 2017; this is in fact predicted by the Hodgkin-Huxley model: Scott, 1977). Gidon et al. (2020) have shown exclusive-or (XOR) activity in single ex vivo human pyramidal cortical neurons, a behavior that effectively destroys some colliding signals. 
At the circuit level, gating circuits (e.g., Steriade & Paré, 2007;Gollisch & Meister, 2010), when closed, can also be conceived as destructive collisions between signals. If collisions need to be managed at the single-cell level and the level of circuits, the same logic applies at the level of mesoscale brain networks (see Discussion). Collisions need not incapacitate a communication network. They may be useful in establishing stable and robust dynamics with locally implemented routing rules. In particular, we suggest that collisions could-almost paradoxically-help promote system efficiency. Destructive collisions may on average help ensure a low level of energy use. Energy efficiency is imperative given that the dynamic connectome of mammals has strong metabolic Emergent sparseness on the mammal connectome constraints (Bullmore & Sporns, 2012;Goñi et al., 2013). Destructive collisions may also promote homeostasis, which appears desirable for maintaining relatively constant activity in the face of changing system demands (e.g., Aeschbach et al., 1997;Turrigiano, 1999). But the brain cannot achieve efficiency simply by using as little energy as possible (Poldrack, 2015). The brain instead operates in sparse fashion, in part because the sensory environment is sparse. To avoid confusion, we note that sparseness (or, equivalently, sparsity) in the present Sparseness (of activity): A characteristic distribution displaying low average activity ratios across time or across units. context refers to the distribution of activity across nodes and over time, rather than to the topology of the network (on a scale from "sparse" to "dense" connectivity). Sparse operation of a system is defined by the achievement of low average activity ratios (see, e.g., Földiák, 2002). Although sparse activity can confer low average energy usage, it does not necessarily imply minimal energy use. Instead, it involves a characteristically non-Gaussian distribution of activity across units, in particular a distribution with a strong peak near zero activity (most units are "off" at a given time) and heavy tails (a small number of units are likely to be highly active). In the brain, it has been understood for decades that only a small fraction of units (10% or less) in a given part of the network can be highly active at a given time (Levy & Baxter, 1997;Lennie, 2000;Attwell & Laughlin, 2003); this is known as population sparseness. In addition, Population sparseness: A measure of the degree of sparseness of activity across units during a given time window. it is estimated that a unit can only be highly active over a small fraction of its lifetime; this is known as lifetime sparseness (see Willmore & Tolhurst, 2001;Graham & Field, 2006). Sparse Lifetime sparseness: A measure of the degree of sparseness of activity of a given unit over time. activity in the visual system, for example, is thought to be in part a result of sparse physical environments (Field, 1994;Olshausen & Field, 1996;Bell & Sejnowski, 1997), in addition to metabolic constraints and other factors (Graham & Field, 2009). We suggest here that collision dynamics that result in emergent sparse activity are good candidates for how the mammal brain routes information. If collisions are destructive, it may be necessary for the brain to adopt strategies to generate redundancy in order to promote reliability. 
Brain network communication models increasingly include not just a pathfinding dimension (spanning a spectrum from random walks to shortest paths: Avena-Koenigsberger et al., 2019), but also a second dimension that can be seen as redundancy (Avena-Koenigsberger et al., 2017;Bettinardi et al., 2017;Tipnis et al., 2018). A simple way to promote redundancy is to produce redundant copies of messages at each node. Based on this idea, we introduce an information spreading (IS) model of brain network Information spreading: A model of network activity where incoming signals at a node are copied to all outgoing nodes. communication. Signal interactions in a system with collisions and information spreading will produce nonlinear, emergent behavior. For example, as more signals and copies of those signals propagate on the network, the influence of collisions is increasingly important. Using numerical simulations, we investigate emergent behavior of the IS model, in comparison to a standard random walk (RW) model. As described more fully in the Discussion section, IS and Random walk: A model system where activity is passed by "walkers" who move at random from node to node. RW models should be seen as spanning a spectrum of distributed routing strategies, rather than as a test model and a control. We show that mammal brain networks using IS achieve a balance between collisions and redundancy and they produce sparse activity. In particular, we show that, across a range of signal loads, low activity and high sparseness are emergent properties of network-wide communication under an IS model. Performance of the IS model contrasts with RW models, which exhibits high and increasing energy use with increasing signal loads, and lower and decreasing sparseness across loads. To investigate whether empirical topology in the mammal connectome Emergent sparseness on the mammal connectome supports sparse, efficient activity patterns, we also compare dynamics of the anatomical network to randomized networks with matched degree distributions. Randomized network: A network whose edges are randomized with respect to an original network, but that preserves the in-and out-degree distributions of the nodes. Overview Using a Markovian agent-based model on monkey and mouse tracer-based connectomes, we implemented an information spreading (IS) model wherein nodes pass copies of incoming signals to all outgoing edges. Simulations utilized a nonlinear collision rule whereby all colliding messages are destroyed. We also tested a standard random walk (RW) model following the same collision rule. In RW, each message is passed from one node to another through a randomly chosen outgoing edge. At each time step, a set number of messages, which we term the load, is injected to randomly chosen nodes; this is the primary independent variable in this study. Connectomes We utilized three mesoscale mammal connectomes: two of the macaque monkey (Markov et al., 2014, termed monkey1 in the present study as shorthand; and the CoCoMac database, see Bakker et al., 2012, termed monkey2), and one for the mouse (Oh et al., 2014, termed mouse). All edges are directed and have a weight of 1. The monkey1 connectome comprises 91 cortical nodes and 1,615 edges, primarily in the visual system (ipsilateral). Of these nodes, 29 have in-and out-degree > 0, while the remainder are source nodes (in-degree = 0). This means that 62 nodes in monkey1 are not reachable from the rest of the network, and only pass messages injected at those nodes. 
This limitation is a result of the retrograde tracing method used by Markov et al. (2014), and is the reason we included the monkey2 network as well. For monkey2, we used only nodes that had an in-degree and an out-degree of at least 1. This set comprised nodes corresponding to 163 cortical regions and 5,093 edges. The mouse connectome spans the entire mouse brain (ipsilateral), comprising 213 nodes and 16,954 edges (all have in-and out-degree > 0). For all connectomes, isolated nodes with no incoming or outgoing edges (present only in monkey2) had been removed. We note that both mouse and monkey2 contain self-loops (i.e., nodes with edges to themselves), but removing these edges did not change the outcome of the experiments so self-loops are included in the presented data. Including nodes with in-degree or out-degree of 0 also did not affect the outcome. Adjacency matrices for the datasets are shown in Figure 1. Message Passing Models In our models, time is discretized and message passing at each node is synchronous. During a time step, a node can be active or inactive (0 or 1). Each time step t, L new messages are injected into the system at randomly chosen nodes, representing load. Variable loads are to be expected in brain networks, but are most likely restricted to a relatively low range given metabolic constraints. Because the three connectomes have different numbers of nodes N, load is expressed as a percentage of the number of nodes in the network. We also report the case of 1 new message injected per time step in each case. We restrict simulations to the load regime with a number of messages less than or equal to 50% of N. During each time step, a node looks to all incoming messages, including messages that are already in the system and new messages that are scheduled to be injected; if there is more than one message coming in, all are deleted. The full set of operations in the model is given Emergent sparseness on the mammal connectome Adjacency matrices of the connectomes tested. Mouse includes the ipsilateral whole brain of the mouse (Oh et al., 2014), while monkey1 (Markov et al., 2014) and monkey2 (Bakker et al., 2012) include monkey cortex. All edges (blue dots) are directed and have weight 1. Nonzero (nz) entries are shown in numerical and percentage terms below each matrix. as pseudocode in Supporting Information Box 1. We are also making code to generate these models available on GitHub (Hao & Graham, 2020). Measures of Dynamic Network Activity Simulations consist of 500 runs of 1,000 time steps each. Data from the first 500 time steps is excluded to allow the analysis of equilibrium dynamics (see, e.g., Schruben & Margolin, 1978). However, we note that data for the full test run produces the same overall pattern of results. First, we measured the fraction of nodes active on a given time step. We measured both attempted activity before accounting for collisions, and actual (net) activity after collisions. Attempted activity: Network activity before destructive collisions are accounted for. Actual (net) activity: Network activity after destructive collisions are accounted for. We also measured the sparseness over time in five time step windows (bins) using the Treves-Rolls measure of sparseness (Treves & Rolls, 1991;Rolls & Tovee, 1995); recall that Treves-Rolls sparseness measure: A measure of the sparseness of a distribution with 0 corresponding to maximal sparseness and 1 corresponding to minimal sparseness. 
sparseness here relates to network activity rather than to network structure. The Treves-Rolls measure is sensitive to higher order statistical regularities in data (see Willmore & Tolhurst, 2001). We calculate population sparseness by constructing a histogram of the firing frequency x_i within a time window for each node, and then summing over the i = 1, ..., N nodes. The Treves-Rolls measure S (Equation 1) divides the square of the sum of the distribution by the sum of the squares of the distribution:

S = ( (1/N) Σ_i x_i )² / ( (1/N) Σ_i x_i² )    (1)

This measure attains maximal sparseness at 1/N if one unit is active per window and all other units are inactive; minimal sparseness is 1.0 for uniformly distributed activity (the sparseness of a standard normal distribution by this measure is 0.64). See Figure 2 for an illustration of this measure. Lower values of Treves-Rolls sparseness indicate more sparse activity. We note that tests of 10-time-step windows for the sparseness measure produced similar results. We also calculated lifetime sparseness as above, summing over time instead of over nodes. Comparison to Randomized Networks To investigate the dependency of network dynamics on the specific wiring patterns of the mammal connectome, we compared network activity under our models to the same models implemented on randomized networks. Randomized networks have the same distribution of in- and out-degree as the empirical networks. This amounts to shuffling the incoming and outgoing edges of nodes in a given network (Maslov & Sneppen, 2002). However, the nature of the connectomes in this study complicates the randomization process. When a network is dense (in a topological sense; except for the present discussion of randomization, all references to density and sparseness refer to network activity rather than topology), as in the mouse connectome and the 29-node core of monkey1, network topology changes little after randomization. We therefore thresholded the mouse connectome to make it less dense for randomization. Since edge weights in the mouse are known (corresponding to tracer volumes), we thresholded the network to exclude edges with a weight below 0.0136 (a value close to the modal weight, and one implied as meaningful by Oh et al., 2014), then randomized the resulting network. Thresholding of the mouse network reduced degree in a way that brought edge density to a level comparable to that of monkey2 (but interestingly, the global topology changed relatively little after removing more than half of the edges, suggesting self-similarity). The density problem is compounded in monkey1 because the full 91-node network is low density compared to the other two networks. Therefore, we have omitted randomization of monkey1; results of the randomized network comparison in the monkey are for the monkey2 connectome, which has intermediate connectivity. Note that each trial in the randomized network simulations corresponds to a different randomized network. Other randomization approaches, such as that of Tipnis et al. (2018), have used iterative search among randomized networks (based on the method of Maslov & Sneppen, 2002) to find those that differ most from the empirical network. However, this approach has potential for bias since only a single randomized network is generated, whereas our method compares the empirical network to a large family of randomizations. RESULTS Qualitative differences in network dynamics are apparent in visualizations of node activity over time.
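The measure in Equation 1 described above is straightforward to compute; the sketch below shows one possible implementation, with synthetic activity data standing in for the simulation output (the array sizes and firing probability are illustrative assumptions).

```python
# Sketch of the Treves-Rolls sparseness measure (Equation 1). The example
# activity matrix is synthetic; in the simulations, rows would be nodes and
# columns time steps within a 5-step window.
import numpy as np

def treves_rolls(x):
    """Treves-Rolls sparseness of a non-negative activity vector x.
    Returns values near 1/len(x) for maximally sparse activity and 1.0
    for perfectly uniform activity (lower = sparser)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    denom = np.sum(x ** 2) / n
    if denom == 0:
        return np.nan  # no activity at all in the window
    return (np.sum(x) / n) ** 2 / denom

rng = np.random.default_rng(3)
activity = rng.binomial(1, 0.1, size=(213, 5))   # 213 nodes, 5-step window

# Population sparseness: per-node firing counts within the window.
pop_counts = activity.sum(axis=1)
print("population sparseness:", round(treves_rolls(pop_counts), 3))

# Sanity checks against the limits quoted in the text.
print("uniform activity    :", treves_rolls(np.ones(213)))    # -> 1.0
print("single active node  :", treves_rolls(np.eye(213)[0]))  # -> ~1/213
```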
RESULTS

Qualitative differences in network dynamics are apparent in visualizations of node activity over time. In Figure 3, we show attempted and actual (net) activity through a representative simulation of the mouse connectome, with nodes shown along the vertical axis and the last 500 time steps of the simulation running from left to right (L = 10%). Figure 3A shows attempted message passing per node according to the color scale shown (i.e., the number of messages colliding). In general, attempted messages are distributed in unimodal, monotonically decreasing fashion in RW, whereas attempted messages are bimodally distributed in IS with a high peak at zero (not shown). Figure 3B shows actual activity for the IS model (left panel) and the RW model (right panel). From inspection, the net activity is more "bursty" and sparse in IS, while activity is more uniformly distributed across nodes and time in RW. We can quantify these observations by calculating the average net activity and the sparseness of activity across simulations. Figure 4 shows average net activity (expressed as a fraction of the number of nodes active in a given model, averaged over time and trials) as a function of load L (expressed as a percentage of nodes creating new messages on each time step, except for the left-most tick, which represents injection of one new message per time step). IS models are shown with solid lines and RW models are shown with dashed lines. Though RW models have lower net activity with L = 1 message, net activity increases rapidly and monotonically as load increases. The relationship is well fit by a log function (not shown). In contrast, IS models show lower activity that is relatively uniform across load.

[Figure 4 caption, partial: ... (triangles) networks. In IS, activity remains mostly constant across load and is lower compared to the corresponding value for RW at all load levels except 1 message per time step. Activity in RW increases by more than two octaves across loads tested. Shaded areas represent 2 standard deviations of variability from the mean across simulations (note that in some cases this dispersion is smaller than the width of the markers/lines). Data points show significant differences (t test, p < 0.01) for corresponding IS and RW measurements at all loads.]

Figure 5 shows population sparseness of activity as a function of load. As with net activity across nodes, sparseness of activity in a given network is greater (closer to zero) for IS compared to RW at all loads except the minimum load. As a function of load, we observed the same pattern for sparseness over time within a simulation (lifetime sparseness of activity) as was seen for population sparseness, namely that sparseness over time in a given network is greater (closer to zero) for IS compared to RW except at the lowest loads (see Supporting Information Figure S1). We note that, like attempted activity, net activity over time for IS is bimodally distributed with a high peak at zero, whereas net activity under RW is distributed in roughly Gaussian fashion (see Supporting Information Figure S2). Results of the comparison to randomized networks (Figure 6) in monkey2 and the thresholded mouse show that IS models have lower net activity and greater sparseness in the empirical network at almost all loads tested. RW models on empirical networks generally show very small decreases (monkey2) or slight increases (thresholded mouse) in activity compared to randomized networks.
More specifically, empirical monkey2 and thresholded mouse networks under IS differ significantly (t test, p < 0.01) at all loads compared to corresponding randomized networks. Under IS, the mean net activity of the empirical networks across loads is lower by 0.013 (monkey2) and 0.0039 (thresholded mouse) activity units compared to corresponding randomized networks. In percentage terms, this equates to a difference in activity between empirical and randomized networks of 7.3% and 2.3%, respectively. The mean sparseness of the empirical network across loads under IS is greater (closer to zero) by 0.026 (monkey2) and 0.0092 (thresholded mouse) units compared to corresponding randomized networks. This equates to a difference in sparseness of 5.6% and 1.7%, respectively. Empirical monkey2 and thresholded mouse networks under RW also differ significantly (t test, p < 0.01) at all but 2 load values compared to corresponding randomized networks. However, compared to corresponding randomized networks, the mean net activity of the empirical networks across loads under RW is marginally lower by 0.00074 (monkey2) or greater by 0.0027 (thresholded mouse) activity units. This equates to a difference in net activity of 0.30% and 0.93%, respectively. Likewise, the mean sparseness of the empirical network across loads under RW is marginally greater by 0.0013 (monkey2) or lower (closer to one) by 0.0012 (mouse) units compared to corresponding randomized networks. This equates to a difference in sparseness of 0.25% and 0.027%, respectively. We note that the same pattern of results also obtains on the unthresholded mouse network. Although the differences between empirical and randomized networks are not large, they are uniformly significant. This evidence is consistent with the notion that empirical network topology in mammals promotes sparse, efficient activity under an information spreading routing strategy.

[Figure 5 caption: Population sparseness of activity as a function of load under IS (solid lines) and RW (dashed lines) models for mouse (circles), monkey1 (squares), and monkey2 (triangles) networks. As with net activity, IS models show relatively constant sparseness of activity across load, with greater sparseness of activity (closer to zero) than RW models at all loads except 1 message per time step. Sparseness in RW decreases by more than a factor of 2. Shaded areas represent 2 standard deviations of variability from the mean of population sparseness across simulations. Data points show significant differences (t test, p < 0.01) for corresponding IS and RW measurements at all loads.]

[Figure 6 caption, partial: In IS models, both the empirical monkey2 network (triangles) and the (thresholded) mouse network (circles) show lower activity and greater sparseness (a difference in percentage terms of between 2% and 7%) compared to corresponding randomized networks. Randomized RW models show largely the same behavior as the empirical networks (differing by less than 1%) in terms of activity and sparseness. Filled symbols indicate significant differences (t test, p < 0.01) between empirical and corresponding randomized networks, whereas open symbols indicate no significant difference.]

DISCUSSION

In the context of collision interactions among signals, we used numerical simulations to show how the mammal brain can exploit mesoscale network structure by spreading information widely, and in particular by sending multiple copies of an incoming signal.
Under an information spreading (IS) strategy for routing, the network achieves globally low net activity and high sparseness of activity, similar to what is found in real mammal brains. Net activity and sparseness of activity change relatively little as load increases under the IS model. This result suggests a way that the brain could achieve ongoing sparse activity spread across the entire system using a local protocol, without deviating too far from globally homeostatic operation. In contrast, random walk (RW) models with the same collision rule produce substantially higher summed activity. In RW, substantial numbers of nodes are active in a given time window, resulting in less sparse activity. Net activity and sparseness of activity under RW change substantially as load increases over the same range, with activity increasing more than fourfold, and sparseness decreasing by around a factor of two. Our findings hold in both the monkey cortex and the mouse whole brain. In addition, empirical networks under IS models show lower activity and greater sparseness of activity than randomized networks with the same degree distributions. Together, these results suggest that a nonlinear collision protocol and mammal connectome topology could promote sparse, efficient routing of communication traffic across the mammal brain network. This protocol is biologically plausible and does not require centralized control. However, one can ask: are RW and IS models really comparable to each other? It is certainly the case that the actual number of messages circulating on the network is different under RW and IS models. For example, a single message injected as load will, on the next time step and in the absence of incoming messages, lead to only one outgoing message under RW, while in IS it will lead to k outgoing messages (where k is the out-degree) under these circumstances. Yet such nonlinear interactions are precisely our object of study. Rather than seeing RW models as a control relative to IS models, the two models should be seen as extreme ends of a spectrum of diffusion-like routing strategies with distributed protocol. This spectrum is orthogonal to the spectrum of pathfinding strategies that spans from shortest paths to random walks (see Avena-Koenigsberger et al., 2019). At one end of the spectrum of routing strategies, we have the random walk approach that is employed extensively to model diffusion-like processes with a conservation law. At the other end of the spectrum are models that generate spreading dynamics (e.g., those used to model infectious disease). Random walk models are in any case very common in network neuroscience, so their collision dynamics are of inherent interest to the field. We have shown that RW models have clear limitations in terms of achieving sparse, low, and relatively uniform activity across loads given simple collision rules. Taking a wider perspective, the management of signals arising from different sources that impinge on the same target is a critical problem in network neuroscience. In the mesoscale mammal connectome, where all nodes are only a few hops from each other (and where signals are excitatory), the system would seem to need a strategy for managing coincident excitations in order to route signals appropriately.
This conception stands in contrast to models of dynamics of the mesoscale connectome (particularly cortex) inspired by integrate-and-fire spiking neuron models, wherein the goal of the system is to have many excitatory signals arrive at the same time in order for the unit to be highly activated. Collisions, in the sense of multiple coincident efferents, are there seen as signals to be integrated. However, studies such as Kaiser et al. (2007) and Kaiser and Hilgetag (2010) that approach dynamics from this perspective confirm the necessity of keeping global activity at a low and relatively constant level. Kaiser and colleagues tested a threshold routing model on surrogate hierarchical networks with biologically plausible parameters, and suggested that brain networks must seek regimes of "limited sustained activity." Using a "spreading" model that is somewhat similar to our IS model, their simulations indicated that specific parameters of activity likelihood promoted limited sustained activity of around 10-20% of nodes. However, many if not most parameter settings led to "dying-out" activity or overload (Kaiser et al., 2007; Kaiser & Hilgetag, 2010). We have shown here that limited sustained activity of around 10-20% of nodes is robustly achieved on mammal connectomes under a nonlinear routing approach employing destructive collisions and information spreading. However, in contrast to Kaiser and Hilgetag (2010), our model shows that this pattern of activity is distributed across the entire network over time, rather than concentrated in a minority of nodes, and this behavior is demonstrated on all simulation runs. In separate tests, we found that the structure of mammal brain networks is such that collisions that sum or threshold signals lead to very high activity and very low sparseness in both RW and IS models. This was suggested by tests of a "let one pass" collision rule whereby exactly one message in a collision of m messages is allowed to pass on a randomly chosen outgoing edge. Even at low load values, the network quickly overloads under this collision rule (especially under IS), producing data that are not meaningful and are therefore not reported here. Our results show that strategies with nonlinear, self-organizing mechanisms that align with basic physiology can generate network activity that offsets collisions, requires no central agent, operates sparsely in time and over nodes, and maintains relatively uniform global activity despite large increases in signal load. However, automatic deletion is not the only or necessarily the best solution for managing collisions. There are many other possible strategies and combinations of strategies that could be at play, including buffering (Mišić et al., 2014a), resending (Graham & Rockmore, 2011), and deflection routing (Fornito et al., 2016). We find some evidence for one alternative strategy, which is to reduce load almost to zero. In this regime, redundancy may not be needed because collisions are rare, and a given path is unlikely to be congested. This is suggested by the finding that RW models achieve lower net activity and higher sparseness of activity than IS models at the lowest load (1 message per time step) in all cases. It remains possible that the brain could adopt this kind of extremely sparse strategy (see, e.g., Ovsepian, 2015).
However, such a system, being so close to floor values, would be restricted to low activity ranges and would be vulnerable to large changes in dynamics if load varies more than a small amount. In any case, if collisions are indeed very rare, this is also a major problem for models based on integrate-and-fire strategies and linear approaches. We note that redundancy in the brain, such as correlated firing among neighboring neurons (see, e.g., Meytlis, Nichols, & Nirenberg, 2012), is typically thought to be something that the system should minimize (Wainwright, 1999; Barlow, 2001). Following Shannon's theory of information, parallel communication channels should seek to reduce mutual information to theoretical minima. However, the brain's network topology is not parallel but rather small world-like (see, e.g., Achard et al., 2006; see also Knoblauch et al., 2016). In this situation, there is no straightforward way to apply Shannon's theory for arbitrary node-to-node communication (El Gamal & Kim, 2011). Our results suggest that redundancy of signals may allow brain networks to strike a dynamic balance between signal creation and destruction. However, copying all incoming signals has its drawbacks. More realistic solutions may be found between the extreme ends of the spectrum of routing strategies. For example, it may be more efficient to send redundant signals only on some fraction of outgoing edges. Tests exploring this parameter are underway; we predict a smooth transition in dynamics between the observed extremes of RW and IS conditions. In any case, the purpose of the present investigation is not to find the optimal solution for intrabrain communication but rather to address the question of how a mammal brain could efficiently route signals, given the likelihood of collisions, the need for distributed protocol, and a demand for low, sparse activity levels. Much as vision scientists have gained knowledge from asking what an efficient retinal code would be given the statistical regularities of the environment (e.g., Field, 1994; Graham & Field, 2009), we aim to advance network model building by asking what an efficient whole-brain routing protocol (the rules or strategy employed by nodes to direct the interchange of signals across a network) would be given the brain's architecture and operational demands. We note that collisions cannot be ignored by assuming that signals arriving at a node at the same time do not interact. Of course, at the mesoscale, inputs arising in different nodes that travel to a target node do not necessarily synapse with the same population within the target node. But if there were dedicated links from sending nodes via a target node to destinations one synapse away, this would imply a different network topology (i.e., the target node would not be considered a node). If the brain performs dynamic routing at nodes, and functional demands appear to necessitate this (Olshausen et al., 1993; Graham & Rockmore, 2011), collisions must be managed. Nevertheless, the brain probably does not annihilate all colliding signals at all nodes. The rules governing collisions may vary in time and space, from those that are fully punitive for coincident signals to those that require greater coincidence. This is a limitation of our study. Collision rules are a dimension worth exploring, but as an initial study, we adopted a simplified model that promotes interpretability.
The difficulty of studying nonlinear interactions of brain network communication is due in part to the problem of simulating dynamics explicitly. The challenge lies in tracking every signal's path and interactions. RW models and shortest path models admit analytical solutions that capture the likelihood of a set of states of the network. But this is generally possible only when collisions and other nonlinear interactions are ignored. As a result, explicit models capable of accounting for collisions have been relatively little studied to date (see Graham & Hao, 2018). With further study of explicit models we may be able to understand fine-grained activity patterns such as regional differences and correlations in activity. However, though we were able to model signal interactions explicitly, the lack of tracking of messages is a limitation of the present study. For example, we would like to know how many messages are delivered, and how quickly. We have identified a way to infer message "age" in our simulations and will present results of this work in a subsequent report. This study has other limitations. One is the idiosyncrasies of the connectomes studied. For example, the CoCoMac network is known to be incomplete (Chen et al., 2020), and the data Markov et al. (2014) are limited by retrograde tracing procedures. In addition, we have not constrained or tailored activity in terms of functional demands. Finally, we are limited in terms of the interpretation of our network randomization results. As noted earlier, randomizing networks in which nodes have a degree that is a significant fraction of the number of nodes leads to random networks that share edges with their empirical counterparts. However, in preliminary studies of the monkey2 network, we found that a nondegree-preserving random network (an Erdõs-Rényi network with the same number of nodes and edges, but where no edges are shared between empirical and randomized networks) shows significantly higher activity and lower sparseness compared to empirical networks under both IS and RW. In fact, the difference between empirical and randomized networks is somewhat more pronounced compared to the degree-preserving case, indicating that degree distributions do play a role in generating low, sparse activity, but they do not fully explain this effect. We conclude that the mammal brain network appears capable of striking a balance between activity and collisions such that sparse activity is robustly maintained across a wide range of loads. At present we cannot say what properties of network structure are necessary for supporting this behavior. Indeed, it remains unknown whether the "creative destruction" we observe here is unique to brains or constitutes a more widespread behavior of complex networks. Nevertheless, we argue that some strategy for managing collisions is necessary, and whatever solution the mammal brain has adopted, it must generate low, stable net activity that is sparsely distributed across the network.
8,891.4
2020-01-01T00:00:00.000
[ "Biology", "Physics" ]
Neural Model Stealing Attack to Smart Mobile Device on Intelligent Medical Platform To date, the Medical Internet of Things (MIoT) technology has been recognized and widely applied due to its convenience and practicality. The MIoT enables the application of machine learning to predict diseases of various kinds automatically and accurately, assisting and facilitating effective and efficient medical treatment. However, the MIoT are vulnerable to cyberattacks which have been constantly advancing. In this paper, we establish a MIoT platform and demonstrate a scenario where a trained Convolutional Neural Network (CNN) model for predicting lung cancer complicated with pulmonary embolism can be attacked. First, we use CNN to build a model to predict lung cancer complicated with pulmonary embolism and obtain high detection accuracy. Then, we build a copycat model using only a small amount of data labeled by the target network, aiming to steal the established prediction model. Experimental results prove that the stolen model can also achieve a relatively high prediction outcome, revealing that the copycat network could successfully copy the prediction performance from the target network to a large extent. This also shows that such a prediction model deployed on MIoT devices can be stolen by attackers, and effective prevention strategies are open questions for researchers. Introduction The number of intelligent Medical Internet of Things (MIoT) deployed online has been constantly increasing, reaching 20.35 billion in 2017, and the estimated number will continually increase to 75.44 billion in the next decade [1]. Besides, according to the International Data Corporation (IDC), the last five years have witnessed a 17.0% annual growth rate in IoT spending from approximately $700 billion in 2015 to nearly $1.3 trillion in 2019 [2]. Among them, MIoT accounts for a large proportion. Tan and Varghese [3] pointed out that there is a huge potential for the application of IoT in the health industry. Nevertheless, practical constraints must be taken into consideration. Vicini et al. [4] presented an approach to combine vending machines with IoT technology to facilitate a healthy lifestyle. However, cyberattacks are not new to IoT, leading to terrible consequences [5,6]. Most of the MIoT are without any defense mechanism. With the widespread application of IoT devices, cyberattacks are also improving, posing a more severe threat to the secure operation not only of IoT devices but also of the entire cyberspace [7,8]. With an increasing number of IoT-related cyber incidents being reported, experts and researchers from the IoT industry and academia have been working to design secure systems and solutions to combat the attacks of various types [9,10]. Many researchers have devoted extensive efforts to ensuring MIoT security and privacy, providing practical guidance for MIoT security. Fu et al. [11] highlight both opportunities and possible threats that IoT faces in two important application scenarios-the home and hospital. Yang et al. [12] provide an extensive survey, presenting the classification of MIoT attacks from perspectives of MIoT security research, threats, and open issues. Boejen and Grau [13] have utilized Unmanned Aerial Vehicles (UAV) to launch an attack in a simulated smart hospital environment and compromise a small collection of wearable healthcare sensors. Sethuraman et al. 
[14] have proposed a new deep learning approach, DFEL, for real-time cyberattack detection in the IoT environment and demonstrated its robustness, achieving high accuracy with significant time savings. However, not many studies investigate attacks targeting the services deployed on MIoT devices, particularly MIoT-based AI services, for example, machine learning-based disease prediction/detection services. Unlike the approach proposed by Mohan [15], which protects the model with lightweight encryption and attribute-based authorization, when selecting the data set for our model we used patient data from a specific area (Yunnan and Chongqing), which greatly reduces the risk of attacking the established network by exploiting vulnerabilities of the data set. At the same time, we store the prediction model of lung cancer complicated with pulmonary embolism in the cloud to further protect our model with the protection measures provided by the cloud. In this paper, we study a scenario where a trained Convolutional Neural Network (CNN) [16] model for predicting lung cancer complicated with pulmonary embolism can be stolen by attackers. Specifically, we build a Copycat CNN [17] using only a small amount of data labeled by the original network, aiming to steal the established prediction model. We show that the stolen model can successfully copy the prediction performance with a minor difference of approximately 3%. By doing this, we demonstrate that a prediction model deployed on MIoT devices can be stolen by attackers. Overall, the contributions of our work are as follows: (1) we create a new platform of surgical IoT for cybersecurity study in high-performance medicine; (2) we propose a model stealing attack on the intelligent medical platform; and (3) we implement and evaluate the proposed intelligent medical platform and model stealing attack. This paper is organized as follows: in Section 2, we review the related works focusing on cyberattacks using deep neural networks for the MIoT. The model stealing attack experiments are designed in the methodology part, which is presented in Section 3. In the next section, the evaluation of the attack scheme on the medical platform is demonstrated and discussed. In the last section, we summarize the results and conclude this paper.

Related Work

2.1. VR for MIoT. IoT applications have been widely used in the medical industry. In recent years, it has become widespread to combine Virtual Reality (VR) technology with medical disciplines. The integration of the Internet of Things and VR technology in the education field can enable learners to combine their conceptual learning with practical experience in a novel way [18]. Coogan and He use Unity Software, combined with a brain-computer interface, to control the VR environment and MIoT devices [19]. To make the operation of the entire medical platform more transparent, we adopted the combination of VR technology and MIoT to correctly reproduce the prediction process of lung cancer complicated with pulmonary embolism through the medical platform.

2.2. Cyberattacks with Deep Neural Networks. Because the Medical Internet of Things builds on the broader concept of the Internet of Things, it is useful to recall that concept's origins: the idea was anticipated in 1995 by Bill Gates in The Road Ahead, and the term "Internet of Things" was first proposed in 1999 by Auto-ID; the Internet of Things has since found corresponding applications in various fields, including the medical field.
In 2013, Hu and his team [20] argued that, with the support and guarantee of powerful Internet of Things technology, personal networking platforms in the medical field would soon gain a strong footing. This became a reality in 2018, when Jagadeeswari et al. [21] proposed a healthcare monitoring system based on big data training on a powerful computing platform, demonstrating that the Medical Internet of Things had become a reality. In 2020, prompted by the growing number of cyberattacks, Flynn et al. [22] provided a proof of concept that an MIoT device and its accompanying smartphone app are vulnerable to attacks. A recent survey on Android malware detection is provided in [23]. This provides a theoretical basis for our attack model. The emerging deep learning techniques have shown impressive performance in various fields, from tasks like speech and object recognition to natural language processing (NLP), and even to cybersecurity tasks such as bug and vulnerability detection [24,25]. Nevertheless, deep learning technologies can easily be fooled by crafted adversarial examples, which have attracted considerable attention since 2014, when Szegedy et al. [26] and follow-up studies [27,28] showed that imperceptibly perturbed input images could successfully fool deep networks. Dalvi et al. [29] and Lowd and Meek [30,31] investigated carefully crafted adversarial samples that can fool linear classifiers in the context of spam email detection. In 2006, Barreno et al. [32] pointed out that machine learning algorithms can be targets of a malicious adversary, and deep learning algorithms are no exception. Regarding the investigation of attacks on deep models using grey-box models, Papernot et al. [33] attacked a grey-box target deep neural network (DNN) using the MNIST database. They use crafted adversarial samples against the target DNN, aiming to craft adversarial examples by approximating the decision boundaries of the target DNN. Subsequently, Bapiyev et al. [34] demonstrated that one of the most promising approaches to developing detection systems for network cyberattacks is to improve the software by applying modern models based on deep neural networks. The results of model testing showed that the accuracy of the basic variant is comparable with the accuracy of modern detection systems for network cyberattacks. Table 1 shows the different D, x, f, ω in the different formula expressions, which represent different levels of knowledge of the attacker. Compared with white-box attacks, grey-box attacks differ in the enumerated expression D and the trained parameters/hyperparameters ω, which are understood in the literature as unknown parameters. It can be concluded from the formula of a black-box attack that we do not know everything about the original network when carrying out a black-box attack. In our attack network, a grey-box attack is adopted. Based on the same data set selection interval, relatively reasonable data labels can be obtained in this way while ensuring accuracy. In this paper, we examine a copy attack using a CNN (which we call a copycat network, a grey-box attack) to copy information from another CNN (the target network) in a disease prediction scenario.
By leveraging a small amount of data labeled by the target network, the copycat network can obtain performance similar to that of the target network, showing that the MIoT-based prediction model is vulnerable to adversarial attacks.

New Platform for Mobile and Intelligent Medicine

3.1. MIoT System Design. Unity Software is a multiplatform integrated game development tool that allows players to easily create interactive content such as 3D video games, architectural visualization, and real-time 3D animation. It is a fully integrated professional game engine. The core code of the Unity engine itself is written in the underlying language C/C++. The image, sound, and physics engines are all compiled in C++. The dynamic link library (DLL) files encapsulate a series of methods and classes; C#, Python, and other programs call the corresponding methods and classes through DLL files to build the game flexibly and with superior performance. Unity can run across platforms, such as Android, iOS, PC, and Web; this article targets the Android platform. Unity publishes the APK file of the VR project to the Android-based VR headset, which then displays the scene. The VR headset is a Pico G2 (Beijing Bird-Watch Technology Co., Ltd.) mobile VR headset, which has a field of view of 101°, a refresh rate of 90 Hz, and a resolution of 3K, providing the wearer with immersive medical VR application scenes (Figure 1). As shown in Figure 1, the whole MIoT system consists of two parts: the left part is the construction of a three-dimensional lung model, in which three-dimensional voxel segmentation was performed on CT images of patients (lung cancer with pulmonary embolism) and the lesions were marked. The right part processes the patient's textual data and uses LSTM and RNN deep learning model algorithms to predict and classify the data, respectively. A safety module is then added to make up the MIoT system (Visual-Haptic Navigation System).

A Deep Neural Model for PE&LC Prediction. In this part, we use a CNN to perform the prediction of lung cancer with pulmonary embolism (LC&PE). As we can see from Figure 2, our CNN-Net architecture contains two 1D convolution layers and two fully connected layers and connects to a sigmoid activation layer. Every 1D convolution layer is equipped with a kernel of size 3, followed by a LeakyReLU activation layer and a max pool layer with a stride of 2 to downsample the text. Between the two fully connected layers (one has an input size of 320 and an output size of 120; the other has an input size of 120 and an output size of 2), there is a LeakyReLU activation layer. Finally, we use a sigmoid neuron as a classifier. We use the convolution layers to extract features from the data. The output value of a layer with input size (N, C_in, L) and output (N, C_out, L) can be precisely described as

out(N_i, C_out_j) = bias(C_out_j) + Σ_{k=0}^{C_in−1} weight(C_out_j, k) ⋆ input(N_i, k),

where ⋆ denotes the cross-correlation operator, N is the batch size, C denotes the number of channels, and L is the length of the signal sequence. When groups = in_channels and out_channels = k × in_channels, where k is a positive integer, this operation is also called depthwise convolution in the literature. For an input of size (N, C_in, L_in), a depthwise convolution with depth multiplier k can be constructed by choosing the layer parameters accordingly.
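As a concrete reading of the architecture just described, the following PyTorch sketch is our own illustration: only the kernel size 3, the fully connected sizes 320, 120, and 2, the LeakyReLU/max-pool pattern, and the sigmoid output come from the text; the convolution channel counts and the input length are placeholders that we chose so that the flattened feature size works out to 320.

```python
import torch
import torch.nn as nn

class LCPENet(nn.Module):
    """Sketch of the described CNN: two 1D convolutions (kernel size 3), each
    followed by LeakyReLU and max pooling with stride 2, then fully connected
    layers 320 -> 120 and 120 -> 2, with a sigmoid output (as stated in the text).
    conv_channels and the input length are assumptions, not taken from the paper."""

    def __init__(self, in_channels=1, conv_channels=(16, 32)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels[0], kernel_size=3),
            nn.LeakyReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            nn.Conv1d(conv_channels[0], conv_channels[1], kernel_size=3),
            nn.LeakyReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(320, 120),   # 320 must equal conv_channels[1] * pooled length
            nn.LeakyReLU(),
            nn.Linear(120, 2),
            nn.Sigmoid(),          # the paper states a sigmoid over the two outputs
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    x = torch.randn(8, 1, 46)        # placeholder: length 46 gives 32 * 10 = 320 features
    print(LCPENet()(x).shape)        # torch.Size([8, 2])
```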
4. Model Stealing Attack to the New Platform

4.1. Overview of the Threat Model. As we can see from Figure 3, the MIoT structure consists of three layers: the perception layer, the network layer, and the application layer. Healthcare data are mainly collected in the perception layer by a variety of devices. The network layer is composed of a wireless system, which processes and transmits the input obtained by the perception layer. According to the actual situation and service needs of the target population, the medical information resources are integrated at the application layer to provide personalized medical services that meet the needs of end users. Dividing the MIoT into these three levels enables a more thorough analysis of where the network is at risk. In the perception layer, Wang et al. put forward the concept of inputs formed by applying small but intentionally worst-case perturbations to examples in the data set; by doing this, they can make a model output an incorrect answer with high confidence [10]. In the network layer, an attacker can steal a model already trained by others for its business value, which greatly reduces the investment in the early stages of research and development and yields higher profits.

Theoretical Description of the Model Stealing Attack. In this part, we introduce how to build our imitation network (copycat network) using data stolen from an existing target network (a CNN in this case). The whole stealing process mainly uses random natural data to derive an imitator network from the existing target network. It includes two main steps: creating pseudo training data and training the imitator network. In the first step, the target network is used as a grey box to label random natural data and thus generate a pseudo data set. Then, this pseudo data set is used to train an imitation network to replicate the behavior of the target network. A data set is needed to train the imitation network (Figure 4). We recommend using pseudo data sets extracted from the target network (including text data related or unrelated to the problem domain (PD)); therefore, the pseudo data set is completely different from the original data set. When performing a stealing operation, the target network receives text data as input and returns class labels as output. The data set can be composed of the same PD as the target network, or it can be composed of random natural text data. First, we assume that the attacker has text data in the same PD as the target network's training data. Second, we suppose that the attacker can only access publicly available large-scale data sets; in our research, the original labels of these data sets are considered irrelevant. When automatically labeling these data sets (PD and/or non-problem-domain (NPD)), the target network is used by the attacker. Another network can then be trained with the labeled pseudo data sets, hoping to capture the nuances of the characteristic regions and to achieve performance close to the target network. Achieving this is mainly based on adding imperceptible noise to the input text data of the CNN to push the network's answer in a certain direction. NPD data can be obtained from the Internet for free. Then, when dealing with small databases (for example, PD data sets), data augmentation can help increase the size of the database to obtain better results. Once a pseudo data set is obtained, the imitation network can start training. Firstly, the attacker must choose a model architecture to mimic.
Note that the attacker performing the replication may not know the target network's model architecture, but this makes no difference. We use a well-known architecture (a CNN architecture) to compare with the original network. The CNN is created for classification, so its output layer can be set according to the specific problem. For the attacker, this may also be the case for the chosen architecture, i.e., the one imitating the target network. So, the output of the selected model must be adapted to the target network's PD; the number of outputs of the replicator must match the number of classes handled by the target network. The purpose of this imitation network is to evaluate whether the proposed method can replicate the target model with a small text data set in the same PD. In this case, we assume that the attacker can access a small amount of data in the same domain but without labels. Therefore, the samples of this data set contain text data from the same PD as the original data set but are labeled by the target network. We now define the transferability of adversarial samples more precisely. Suppose an opponent is interested in producing an adversarial sample x* that is assigned a class different from the class assigned to the legitimate input x by the model. This can be achieved by solving the following optimization problem:

x* = x + δ_x = x + arg min { ‖z‖ : f(x + z) ≠ f(x) }.

The adversarial sample x* is deliberately crafted to mislead the model f. However, as mentioned earlier, such adversarial samples are in practice often also misclassified by models f′ other than f. To facilitate discussion, we formalize the transferability of adversarial samples as the proportion of adversarial samples crafted against f that are also misclassified by f′. The set X represents the expected input distribution of the tasks solved by model f and model f′. We divide adversarial sample transferability into two variables to characterize the pair of models (f, f′). First is transferability within a technology, which defines transferability between models trained with the same machine learning technology but with different parameter initializations or data sets (for example, f and f′ are both neural networks or both decision trees). Second is cross-technology transferability, which considers models trained with two different technologies (for example, f is a neural network and f′ is a decision tree).

Discussion on the Specific Medical Scenario and the Attack. Lung cancer with pulmonary embolism accounts for a large proportion of medical mortality, a large part of which is due to errors in the diagnosis of patients with lung cancer with pulmonary embolism. Our system, after several training steps, can accurately predict whether a lung cancer patient will also have pulmonary embolism. This allows doctors to reach an accurate diagnosis of the patient and develop a suitable plan to reduce the mortality rate. The system is of great value both medically and economically. However, this system can be vulnerable to attacks. The attack we designed was to steal a trained model. At a time when intellectual property is increasingly important, attacks of this kind can severely damage the profit of the model owner and can lead to leaks of patients' private data. In this paper, we implement a copycat model to steal a trained network for predicting lung cancer with pulmonary embolism and demonstrate the feasibility of successfully copying the performance of a trained model.
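The two-step procedure described above (label a pseudo data set with the target model, then train the copycat on the stolen labels) can be sketched in a few lines of PyTorch. This is our own minimal illustration, not the authors' implementation: the model objects, the data loaders, and the assumption of hard labels from the queried API are placeholders; the learning rate and weight decay echo the hyperparameters stated later in the paper.

```python
import torch
import torch.nn as nn

def steal_labels(target_model, unlabeled_loader, device="cpu"):
    """Step 1: query the target model on unlabeled (PD and/or NPD) data and
    keep its predictions as 'stolen' labels for a pseudo data set."""
    target_model.eval()
    pseudo = []
    with torch.no_grad():
        for x in unlabeled_loader:
            x = x.to(device)
            labels = target_model(x).argmax(dim=1)   # hard labels from the queried API
            pseudo.append((x.cpu(), labels.cpu()))
    return pseudo

def train_copycat(copycat, pseudo_data, epochs=10, lr=1e-4, device="cpu"):
    """Step 2: train the copycat network on the pseudo data set of stolen labels."""
    copycat.to(device).train()
    opt = torch.optim.Adam(copycat.parameters(), lr=lr, weight_decay=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in pseudo_data:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(copycat(x), y)
            loss.backward()
            opt.step()
    return copycat
```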
The data set we use consists of 179 lung cancer patients with pulmonary embolism, 1,372 lung cancer patients without pulmonary embolism, and 71 samples randomly collected from natural data, which together make up the original data set (of size 1,622). Among the total number of 1,622 patient samples, 60% of the samples were used as a training set and 40% as a test set. As a result, our system predicted lung cancer with pulmonary embolism with a precision of 79.43%.

Implementation of the Platform and the Attack. The process of releasing the VR project from Unity to the Android platform is shown in Figure 5. As shown in Figure 5, the overall display is the process of a copycat model attacking the medical prediction model of lung cancer with pulmonary embolism; in this process, the copycat model plays the role of a thief. The prediction model of lung cancer with pulmonary embolism established by us is stored in the cloud. First, we determine the network model used by the copycat and build the model with code compilation software; we then feed in data that follows the input requirements of the original model and steal the resulting labels, which are used to train the copycat network. To make the whole prediction result more convenient to observe, we used the Unity 3D platform for 3D modelling to generate a 3D lung. First, we used code to isolate the lesion area in the CT image of the patient and generated an OBJ file, which was imported into the Unity 3D platform for modelling. The upper part of the figure shows the 3D modelling process, while the lower part shows the whole process of the copycat model attacking the cloud-hosted prediction model of lung cancer combined with pulmonary embolism. The whole framework shows the process of the copycat model attacking the MIoT. The prepared data set is imported into the target network stored in the cloud, and the labels corresponding to our data set are output at the same time. The network we selected was trained with the data sets and the stolen labels. During training, the parameters and hyperparameters of the network were constantly fine-tuned so that the copycat network continuously approached the target network and achieved similar behavior, which meant that our attack was successful.

Performance of the Intelligent Medical Platform. We use the confusion matrix as the evaluation standard of the intelligent medical platform. In the prediction analysis, the confusion table, sometimes called a confusion matrix, is a two-row, two-column table that reports the counts of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN).

[Figure 4 caption: On the left side, the target network is trained by an original data set and is available as an API, taking text data as input and outputting class labels. The right side shows the process of obtaining stolen tags and creating pseudo data sets: a random natural text data set is sent to the API to obtain tags. This pseudo data set is then used to train the imitation network.]

In the predictive classification model, a large number of TP and TN and a small number of FP and FN mean that the prediction accuracy is high (as can be seen from Figure 6). However, the confusion matrix only records raw counts; faced with a large amount of data, it is sometimes difficult to judge a model from the counts alone. Therefore, secondary and tertiary indicators are derived as an extension of the basic counts of the confusion matrix (obtained by adding, subtracting, multiplying, and dividing the lowest-level indicators).
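A short Python sketch of such derived indicators (accuracy, precision, recall, F1) computed from the four basic counts; the function name is ours, and the example counts are a hypothetical split of the 311 test samples, since the text only reports the two correct-quadrant totals.

```python
def derived_metrics(tp, fp, fn, tn):
    """Secondary indicators derived from the four confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical split: 267 + 31 correct predictions out of 311, remaining 13 errors
# divided arbitrarily between false positives and false negatives for illustration.
print(derived_metrics(tp=31, fp=7, fn=6, tn=267))
```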
Therefore, after we obtain the confusion matrix of lung cancer with pulmonary embolism, we need to check how many observed values fall in the second and fourth quadrants; their total (267 + 31 = 298) takes up a large proportion of the overall count (311), which means that our prediction model is effective. The macro average is the average of the recall of class 1 and the recall of class 0, while the weighted average is calculated using the proportion of samples as the weight. From the table above, our model has high prediction accuracy, and from Table 2 we can see that it achieves very high precision.

Effectiveness of the Model Stealing Attack. We trained a CNN to predict LC&PE, using an adaptive learning rate of 1e-4, which is then reduced based on the behavior of the validation loss. Other hyperparameters include a batch size of 8, the number of instances (T) set to 200 (unless otherwise specified), and the Adam optimizer with a weight decay of 0.01. Table 3 lists the performance metrics of the Copycat CNN, and Table 4 lists the absolute values of the performance differences between the original network and the imitator network after training. Combined, the data in Tables 3 and 4 show that the copycat model achieves high accuracy in stealing the prediction model of lung cancer with pulmonary embolism, with performance that is almost the same as the original. Figure 7 shows the absolute value of the difference between the original network and the imitator network after training; in terms of precision/recall, the performance variations between the Copycat CNN and the original network range from 0.3% to 2.6%. Figure 8 shows the ROC curves for LC with PE and LC without PE in Chongqing and Yunnan. The nearly identical bar charts and ROC curves close to 1 demonstrate that the copycat network we built behaves very similarly to the original network. Overall, the performance difference between the network stolen from the target medical platform via the copycat model and the original network is negligible. This means that we can successfully use deep learning models to steal the target network with only a small amount of labeled data. Comparing the experimental data, we can see that the copycat scores only slightly lower than the original network across the various metrics; for lung cancer prediction accuracy and F1 score, the difference is effectively zero, which shows that the stolen network has closed the gap with the original network and confirms our hypothesis. We may conclude that the prediction results of the copycat model are 99% identical to those of the original model.

Conclusions

In this paper, we establish a new platform based on surgical IoT for cybersecurity study. On the established intelligent medical platform, we propose a CNN for lung cancer with pulmonary embolism prediction. To demonstrate the attack on an established model on the surgical IoT platform, we implemented a copycat model that mimics the target CNN, trained using a small number of randomly selected samples labeled by that CNN. Experimental results show that the replication model can successfully replicate the performance of the target CNN, achieving minor performance variance (less than 3%). The success of the attack shows that intellectual property, such as a trained AI model built with private and sensitive information, can be stolen.
How to effectively prevent attacks of such kind from happening is an open question for researchers from the fields of cybersecurity, MIoT, and deep learning. Data Availability The data supporting the results of this study can be obtained from the corresponding author.
6,337.8
2020-11-26T00:00:00.000
[ "Computer Science" ]
Extended Poisson Inverse Weibull Distribution : Theoretical and Computational Aspects Extended Poisson Inverse Weibull Distribution: Theoretical and Computational Aspects Saeed Al-mualim1,2 1 Management Information System Department, Taibah University, Saudi Arabia 2 Department of Statistics, Sana’a University, Yemen Correspondence: Saeed Al-mualim, Management Information System Department, Taibah University, Saudi Arabia; Department of Statistics, Sana’a University, Yemen. E-mail: s<EMAIL_ADDRESS> Introduction A certain continuous random variable (rv) Z is said to have the Inverse Weibull distribution (IWD) with scale parameter δb β −1 > 0 and and shape parameter β > 0 if its probability density function (PDF) is given by ) . In this work we shall propose a new version (generalization) of the the Topp Leone Inverse Weibull distribution (TL-IWD) via using the discrete zero truncated Poisson distribution (ZTPD).The PDF and CDF of TL-IWD are given by f [θ,b,β,δ] and respectively, where β, b, and θ > 0 are shape parameters.The probability mass function (PMF) of ZTPD is given by If we have a system has N subsystems functioning independently at a given time t where N has ZTPD with expected value E(N) and variance Var(N|α) are, respectively, given by Assume that the failure time of each subsystem has the TLEIW model defined by PDF and CDF in (1) and (2).Let Y i denote the failure time of the i th subsystem and let then the conditional CDF (CCDF) of X given N is then, the marginal CDF (MCDF) of X is will be equation ( 5) is called the CDF of the PTLIWD, where Φ=α, θ, b, β, δ.The corresponding PDF of (5) reduces to Some useful extensions of the IWD can be found in the statistical literature and cited such as the beta IW distribution (B-IWD) (see Barreto-Souza et al. (2011)) , the Marshall-Olkin IW distribution (MO-IWD) (see Krishna et.al. (2013)), the transmuted IW distribution (T-IWD) (see Mahmoud and Mandouh (2013)), the transmuted exponentiated IW distribution (TE-IWD) (see Elbatal et al. (2014)), the transmuted Marshall-Olkin IW distribution (TMO-IWD) (see Afify et al. (2015)), the transmuted exponentiated generated IW distribution (TEG-IWD) (see Yousof et al. (2015)), the beta expo- Expanding A i via the power series, we have inserting ( 7) into (6), we get Consider the power series Using (9) we get where and is the IWD with scale parameter δ {[(1 + τ) θ + w] b} β −1 and shape parameter β and is the IWD with scale parameter δ {[(1 + τ) θ + w + 1] b} β −1 and shape parameter β.Via integrating (10), we get ] , Vol. 8, No. 2; 2019 and shape parameter β. The PTLIWD is a suitable for fitting the unimodal and right skewed data sets (see Figure 1(left panel)).The PTLIWD provide adequate fits as compared to other IWDs in both applications with smallest values of AI C and BI C . Moments The r th non central moment of X is given by then we have where Setting r = 1, 2, 3 and 4 in (11), we have which is the mean of X, and Incomplete Moments (IM) The Then where Probability Weighted Moments (PWMs) The (s, r)th PWM of a rv X following the PTLIWD is derived by Using ( 5) and ( 6), we have ] , where ) . The (s, r)th PWM will be Residual Life and Reversed Residual Life Functions (MRL) & (MRRL) The n th MRL given as Then th MRL of X can derived as where The n th MRRL given as X≤t,n=1,2,... ] we obtain Then, the n th MRRL can written as where q τ,w = a τ,w (−1) r t n−r and q ⋆ τ,w = a ⋆ τ,w (−1) r t n−r . 
Order Statistics

The PDF of the i-th order statistic, say X_{i:n}, can be expressed in terms of B(i, n − i + 1), the beta function. Using (5), (6), and (13), we obtain the PDF of X_{i:n} and, from it, the moments of X_{i:n}.

Maximum Likelihood Estimation

The log-likelihood function ℓ(Φ) can be maximized numerically, for example by using R (optim), and the components of the score vector are obtained by differentiating ℓ(Φ) with respect to each parameter.

Conclusions

A new extension of the Poisson Inverse Weibull distribution is derived and studied in detail. A number of structural mathematical properties are derived. We used the well-known maximum likelihood method for estimating the model parameters. The new model is applied to modeling some real data sets to prove its importance and flexibility empirically. We also conclude that the PTLIWD provides adequate fits as compared to other IW extensions, with the smallest values of AIC and BIC in both applications. From application 1, the PTLIWD is much better than the MO-IWD, B-IWD, Kum-IWD, MO-IRD, MO-IED, E-IWD, and IWD.

Figure 2. TTT plots for the 1st data set.
Figure 3. Fitted PDF, PP plot, Kaplan-Meier survival plot, and estimated HRF for the 1st data set.
Figure 5. Fitted PDF, PP plot, Kaplan-Meier survival plot, and estimated HRF for the 2nd data set.
Table 1. The AIC and BIC statistics for the survival times for Guinea pigs.
Table 3. The AIC and BIC statistics for the repair times data.
Table 4. MLEs and their standard errors (in parentheses) for the repair times.
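The numerical maximization step described above is straightforward to reproduce. Because the full density of the extended model is not reproduced in the text, the sketch below fits the plain two-parameter Inverse Weibull by maximum likelihood, simply to illustrate the same workflow (R's optim there, scipy.optimize here); the data values and starting points are hypothetical placeholders, not the data sets used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of the two-parameter Inverse Weibull
    f(x; beta, delta) = beta * delta * x**-(beta + 1) * exp(-delta * x**-beta)."""
    log_beta, log_delta = params          # optimize on the log scale so beta, delta > 0
    beta, delta = np.exp(log_beta), np.exp(log_delta)
    ll = np.sum(np.log(beta) + np.log(delta) - (beta + 1) * np.log(x) - delta * x ** (-beta))
    return -ll

# Hypothetical positive observations standing in for a repair-times data set.
x = np.array([1.1, 0.8, 2.3, 4.0, 0.5, 1.7, 3.2, 0.9, 1.4, 2.8])
res = minimize(neg_loglik, x0=np.zeros(2), args=(x,), method="Nelder-Mead")
beta_hat, delta_hat = np.exp(res.x)
print(beta_hat, delta_hat, -res.fun)      # MLEs and maximized log-likelihood
aic = 2 * 2 + 2 * res.fun                 # AIC = 2k - 2*loglik with k = 2 parameters
print(aic)
```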
1,237.6
2019-02-19T00:00:00.000
[ "Mathematics" ]
Above-water reflectance for the evaluation of adjacency effects in Earth observation data: initial results and methods comparison for near-coastal waters in the Western Channel, UK Un-supervised hyperspectral remote-sensing reflectance data ( < 15 km from the shore) were collected from a moving research vessel. Two different processing methods were compared. The results were similar to concurrent Aqua-MODIS and Suomi-NPP-VIIRS satellite data INTRODUCTION The use of near-coastal (less than 15 km from the shore) satellite ocean colour data is limited by difficulties with: adjacency effects [1], atmospheric correction [2] and in-water optical complexity [3].A large effort in the scientific community has been dedicated to the development of atmospheric correction models [4]- [7] and algorithms for separating the signal from optically active components in coastal waters [8]- [11].Comparatively, less effort has been dedicated to the investigation of the adjacency effect [12], which is critical in near-coastal imagery.Conceptually, the adjacency effect is due to the presence of a scattering atmosphere over a reflecting surface that is nonuniform.This causes the radiance from high reflectivity areas to "spill" over the neighbouring low reflectivity areas, increasing their apparent brightness and modifying observed spectral properties.With the increased current and planned use of satellites for water quality assessment and a profound interest in biogeochemical processes and water quality in the nearshore environment, there is a need to refine remotely sensed product quality in the near-coastal area, and therefore to better understand the adjacency effect. In order to improve our knowledge of coastal adjacency effects, in-situ remote sensing reflectance (R rs ) data would ideally be gathered from transects perpendicular to the coastline.In addition, to be able to test adjacency effect algorithms, the in-situ data would have to be matched with concurrent satellite imagery.Unfortunately, such data collection implies uncertain planning and examples in the literature are scarce.R rs spectra were measured over very turbid waters off the Belgian coast [13].Alternatively, a modelling approach has been proposed in the absence of in-situ data of those characteristics.This approach used a 3D backward Monte Carlo code and Finite Elements Method plane-parallel radiative transfer codes to simulate adjacency effects near the coast [14].Having to resort to a modelled dataset highlights the need for a dataset collected specifically with the purpose of testing adjacency effect correction schemes. 
The present work reports on in-situ un-supervised R rs measurements from above water hyperspectral radiometers.Uncertainty in above-water radiometry arises from: instrument calibration and performance, correction for air-sea interface reflection and optical changes of the water related to the measurement platform (like ship perturbation of the light field) [15,16].In this work we specifically quantify the discrepancies arising from the air-sea interface reflection correction parts and the influences on derived R rs .We do so by processing the same data with two methods: the similarity spectra [17] and the fingerprint approach [18].Preliminary results on the comparison of in-situ R rs data with ocean colour images using standard processing algorithms are also presented.The concurrent in-situ and satellite observations will be useful to the scientific community developing corrections of the adjacency effect in current and upcoming sensors. METHODS We designed a sampling strategy to collect suitable datasets for testing algorithms in coastal waters off Plymouth (UK).These are land influenced and yellow substance dominated waters, while relatively clear of particles (chlorophyll con- In-situ sampling took place on the RV Plymouth Quest (5 th -7 th Sept. 2012), during a quasi-perpendicular transect to the coast off Plymouth, while the vessel carried out other routine tasks.Only data from the 5 th September between 09:22 a.m. and 11:30 a.m.GMT were co-incident with clear satellite imagery (Figure 1).Un-supervised sampling of above water radiometric quantities was performed using a hyperspectral HyperSAS system (Satlantic Inc.Halifax, Canada), composed of three sensors simultaneously measuring downwelling irradiance (E d ), sky radiance (L s ) and water leaving radiance (L t ).This system also included a Satlantic tilt, heading and roll sensor (THR) and GPS.The three optical sensors were mounted on a pole on the bow of the vessel at 5 m above the water surface.L t was measured pointing to the water surface with an angle of ∼40 • from nadir (i.e.viewing zenith angle, θ v ) and the crew was instructed to measure away from the sun (viewing azimuth angle, φ v ) with an angle ∼135 • [21], whenever possible during other routine operations.The sensors collected data semi-continuously with 3.3 nm spectral resolution between 350 and 800 nm, with a scanning frequency between 4 and 0.5 Hz, depending on the sensor optics.The optical data were converted to physical units and processed to Level3a using the manufacturers software (Prosoft v7.7.16) which merged the data to 1 Hz.The calibration of the optical instruments had been done by the manufacturer previous to the deployment (April 2012).The optical sensors have a nominal calibration uncertainty of 3% (Satlantic Inc., personal communication). We used wind speed and an index of the cloud coverage changes (i.e.πL s (400)/E d (400)) to characterise measurement conditions. 
Data processing We used two different approaches to compute the air-sea interface reflection (ρ sky ) and to correct for sun-glint contamination.The first approach (i.e.similarity spectra) [17] expresses ρ sky as a function of wind speed, derived from Hydrolight computations [21].This includes a switch to an overcast sky model when L s (750)/E d (750) > 0.05, when ρ sky is assigned a constant value of 0.0256.This approach is also used to calculate the remote sensing reflectance (R rs ), based on the observation that R rs in the NIR has a constant shape in moderately turbid to turbid waters.The method was originally developed using model RAMSES sensors from TriOS Optical Systems (Rastede, Germany).We applied the method to the Satlantic HyperSAS instrument.A major difference between the two instruments was their spectral range: 320 to 950 nm for the TriOS and 350 to 800 for HyperSAS.This difference caused our choice of reference wavelengths for the similarity spectra to be λ 1 = 720 nm and λ 2 = 780 nm, as opposed to 779 and 865 nm from the original publication [17]. The second approach used in this work was developed for unsupervised sampling on ferries in the Baltic Sea [18] (i.e.fingerprint approach).This spectral optimization method retrieves ρ sky by minimizing the propagation of atmospheric absorption features to R rs .The model provided flags for values of ρ sky too low (lower than 0.0240), too high (when ρ sky yields R rs = 0 in any waveband between 375 and 800 nm) or suspect (when ρ sky yields negative R rs in the same spectral range). Statistics We follow the convention of Hooker et al. [22] and use an unbiased parameter, because we do not assume either of the processing methods to be more correct than the other.Comparison between the two processing methods was quantified using the unbiased percent difference (UPD) computed as: where X is the variable being compared for a given spectra, i, the superscript F is for the fingerprint approach and S is for the similarity spectra. Stable meteorological conditions prevailed during the week when sampling was performed.On the sampling day, relatively high median wind speed of 11±2 m/s was recorded together with clear sky and stable cloud cover conditions (Figure 2).It can be seen how the most stable φ v were obtained between 9:35 and 10:30 (Figure 2(c)). Remote sensing data The remote sensing data used in this study have been obtained from two different sensors: the Moderate Resolution Imaging Spectroradiometer (MODIS) carried on board of NASA's satellite Aqua, and the Visible Infrared Imaging Radiometer Suite (VIIRS), on board of NOAA's Suomi NPP satellite.MODIS L1A data were downloaded from the Ocean Color Web Page (http://oceancolor.gsfc.nasa.gov).These data were processed by SeaDAS (http://seadas.gsfc.nasa.gov/) to generate the L2 geolocated R rs data, masking out the land and clouds.We used MODIS images at 500 m spatial resolution.VIIRS L2 data were also downloaded from the Ocean Color Web Page.These data were generated by NASA experimental processing.VIIRS images have a resolution of 740 m.Data corresponding to the transect from offshore to the coastline was were extracted from the Remote Sensing images using the BEAM Toolbox (http://www.brockmann-consult.de/cms/web/beam/) to compare with the in-situ measurements (Figure 1). 
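The following sketch illustrates, under stated assumptions, the two ingredients compared in this section: an air-sea interface correction with the overcast switch quoted above, and the unbiased percent difference. The clear-sky wind-speed dependence of ρ_sky and the exact UPD convention shown are the forms commonly attributed to [17] and [22], respectively, and should be read as assumptions rather than a reproduction of the processing code; the NIR similarity-spectrum residual correction is omitted:

```python
import numpy as np

# Sketch of (1) an air-sea interface correction with the overcast switch quoted in the
# text and (2) the unbiased percent difference (UPD). The clear-sky rho_sky(wind) values
# and the UPD convention are assumptions of this sketch, not the study's code.

def rho_sky(wind_speed, ls750, ed750):
    """Air-sea reflectance factor: overcast constant when Ls(750)/Ed(750) > 0.05,
    otherwise a simple wind-speed dependence (assumed clear-sky form)."""
    if ls750 / ed750 > 0.05:          # overcast switch quoted in the text
        return 0.0256
    return 0.0256 + 0.00039 * wind_speed + 0.000034 * wind_speed ** 2

def rrs(lt, ls, ed, rho):
    """Remote-sensing reflectance from above-water measurements."""
    return (lt - rho * ls) / ed

def upd(x_fingerprint, x_similarity):
    """Unbiased percent difference between the two processings (assumed convention)."""
    return 200.0 * abs(x_fingerprint - x_similarity) / (x_fingerprint + x_similarity)

# Example at 555 nm (radiances in arbitrary but consistent units)
lt, ls, ed = 0.0125, 0.045, 1.30
rrs_sim = rrs(lt, ls, ed, rho_sky(11.0, ls750=0.05, ed750=1.20))
rrs_fin = rrs(lt, ls, ed, 0.0384)     # median fingerprint value reported in the text
print(f"Rrs(similarity) = {rrs_sim:.5f} sr^-1, Rrs(fingerprint) = {rrs_fin:.5f} sr^-1")
print(f"UPD = {upd(rrs_fin, rrs_sim):.1f} %")
```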
RESULTS AND DISCUSSION All ρ sky values obtained from the similarity spectra and the fingerprint approach are shown in Figure 2(c).When the similarity spectra data filtering criteria were used to filter the spectra [23] for the whole sampling period (i.e.9:22 to 11:30), the number of valid data was reduced from 2607 to 279 (10.7%).On the other hand, when the fingerprint approach was used, the percentage of valid ρ sky was 28.8%.The remaining ρ sky values were flagged as too low (29.4%),too high (41.3%)and suspect (0.5%).The intersection between the acceptable spectra for the similarity spectra and valid data from the fingerprint approach yielded 84 spectra (3.2%), which were used for the comparison hereafter.Median±68 th percentiles of ρ sky for the similarity spectra and fingerprint approach were respectively: 0.0345±0.003and 0.0384±0.015.The average UPD was 26% and no significant correlation between UPD and φ v was found.However, selecting only data from the period with stable φ v (i.e.9:35 to 10:30, N = 33, or 1.3% of 2607), reduced the average UPD to 20%. The discrepancy in ρ sky translated into spectral R rs .Average UPD was higher for the blue bands than for green: 45.3% at 412 nm and 18.5% at 555 nm.The mean spectral UPD for the most common bands in that interval (i.e.412, 442, 488 and 555) was 27.9%.A similar comparison between other methods for processing above water radiometry [22] produced much lower mean UPD values for clear sky days (e.g.average spectral UPD was 1.2%).The fingerprint approach would benefit from inclusion of spectral radiance in the 320-400 nm domain since there are some major gas absorption features there.This may also explain to some extent why blue bands show larger discrepancies.Further testing of this hypothesis is needed by extending the HyperSAS spectral range to wavelengths below the current 350 nm limit. Overall agreement between in-situ (not-normalised) R rs and satellite R rs can be observed (Figure 3), with VIIRS data being closer to in-situ observations than MODIS for the whole transect.There is an increase in R rs near to the coast in the in-situ data and VIIRS, but not valid data in the MODIS 500 m image.This highlights the potential for the use of coupled satellitesnear shore in-situ data to improve the correction of the adjacency effect in coastal waters. CONCLUSIONS AND OUTLOOK We have presented initial results from a sampling designed to collect in-situ above water reflectance data to initiate the creation of a dataset suitable to evaluate adjacency effect models in coastal waters.In order to gain confidence in this dataset, we have explored two sources of uncertainty: viewing azimuth sampling and data processing. 
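For readers interested in how such a transect comparison can be assembled, the sketch below pairs each in-situ sample with the nearest valid satellite pixel; the study used the BEAM toolbox for this step, so the grid, coordinates and values shown here are purely illustrative:

```python
import numpy as np

# Generic transect match-up: for each in-situ sample position, pick the nearest satellite
# pixel and pair the two Rrs values. Grid, coordinates and values are made up for illustration.

def nearest_pixel(lat, lon, pix_lat, pix_lon):
    """Index of the satellite pixel closest to an in-situ position (flat-earth approximation)."""
    d2 = (pix_lat - lat) ** 2 + ((pix_lon - lon) * np.cos(np.radians(lat))) ** 2
    return np.unravel_index(np.argmin(d2), d2.shape)

# Tiny illustrative satellite scene (Rrs at 555 nm, sr^-1) with NaN where masked
pix_lat, pix_lon = np.meshgrid(np.linspace(50.25, 50.40, 4),
                               np.linspace(-4.30, -4.10, 4), indexing="ij")
sat_rrs = np.array([[0.004, 0.005, np.nan, 0.006],
                    [0.005, 0.006, 0.006, 0.007],
                    [0.006, 0.007, 0.008, 0.008],
                    [np.nan, 0.008, 0.009, 0.010]])

ship_track = [(50.27, -4.25, 0.0051), (50.32, -4.20, 0.0068), (50.37, -4.14, 0.0092)]
for lat, lon, rrs_insitu in ship_track:
    i, j = nearest_pixel(lat, lon, pix_lat, pix_lon)
    print(f"lat {lat:.2f}, lon {lon:.2f}: in-situ {rrs_insitu:.4f} vs satellite {sat_rrs[i, j]:.4f} sr^-1")
```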
This study has concluded that maintaining constant viewing angles reduced the differences between the two processing methods to 20%. Correct and constant viewing angles also provided, qualitatively, better agreement with concurrent MODIS and VIIRS (experimental) full-resolution satellite data. In the case of un-supervised sampling, this argument supports the use of automatic azimuth-adjusting platforms such as the R-Flex system [24]. Concerning the data processing, the two recent methods used in this study gave a large discrepancy. Sources of this discrepancy could be the differences in spectral range between the instruments used in our study and those originally used in the development of the fingerprint approach. This hypothesis needs to be tested in the future. Because the Plymouth coastal area has a low sediment load, the present comparison of methods could also be extended to include additional methods for the air-sea interface correction in use for clear waters [25] and other recent processing methods for coastal waters [26]. The viewing geometry of above-water sensors and the processing methods used are just two of the sources of uncertainty in the correction for air-sea interface reflection. Other sources of error need to be further investigated (e.g. wave slope statistics, integration time, sky radiance distribution, instrument deployment from a moving platform) [15,16]. A more complete uncertainty analysis is therefore required, as well as cross comparisons with similar measurements in the area by other teams [17]. Through a better characterisation of the uncertainties, we expect to provide robust and useful datasets for the study of the adjacency effect on Earth observation images of coastal areas. FIG. 3 Comparison of in-situ HyperSAS with satellite data. In-situ data at 555 nm were processed with the similarity and fingerprint methods. The blue area highlights the difference between the two processing methods, as a measure of the uncertainty in the in-situ data. Green solid lines are satellite data: a) VIIRS 551 nm band (750 m spatial resolution); b) MODIS-A 555 nm band (500 m and 1 km resolution). Arrows mark the approximate distance to the shore in the grey-scale area (darker grey is nearer to shore, i.e. ∼50.35°N).
2,968
2013-09-03T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Optimal joint measurements of complementary observables by a single trapped ion The uncertainty relations, pioneered by Werner Heisenberg nearly 90 years ago, set a fundamental limitation on the joint measurability of complementary observables. This limitation has long been a subject of debate, which has been reignited recently due to new proposed forms of measurement uncertainty relations. The present work is associated with a new error trade-off relation for compatible observables approximating two incompatible observables, in keeping with the spirit of Heisenberg's original ideas of 1927. We report the first \textsl{direct} test and confirmation of the tight bounds prescribed by such an error trade-off relation, based on an experimental realisation of optimal joint measurements of complementary observables using a single ultracold $^{40}Ca^{+}$ ion trapped in a harmonic potential. Our work provides a prototypical determination of ultimate joint measurement error bounds with potential applications in quantum information science for high-precision measurement and information security. Introduction Quantum measurement, whilst being fundamental to quantum physics, poses perhaps the most difficult problems for the understanding of quantum theory. There are still open questions regarding quantum measurement, such as: why and how does the wavefunction collapse happen; and what exactly are the fundamental precision limits imposed on measurements by the principles of quantum mechanics? With today's rapid progress in technology, high-precision measurements are approaching the ultimate quantum limits. Recent advances, particularly in the area of quantum information science, have led to heightened interest in the fundamental limitations on the achievable quantum measurement accuracy. The unique characteristics of the quantum world, such as Bell non-locality, Einstein-Podolsky-Rosen steering, and entanglement [1,2], are actually linked with uncertainty relations for errors in joint measurements [3,4] more than with the traditional defined uncertainty relations that only address the necessary dispersion in the system observables prior to the measurement. Therefore, scrutinising the lowest error bounds allowed by measurement inaccuracy has become important for current investigations into fundamental quantum physics. The question of error bounds for joint measurements of incompatible observables was already raised by Werner Heisenberg in 1927, who proposed an answer with his famous uncertainty relation [5]. The standard textbook version of the uncertainty relation is the Robertson-Schrödinger inequality: ∆A∆B ≥ | [A, B] |/2, where ∆A and ∆B are the standard deviations of two non-commuting operators A and B and the lower bound is given by the expectation value of the commutator of these operators. This relation concerns separate measurements of A and B performed on two ensembles of identically prepared quantum systems. Importantly, it is conceptually different from Heisenberg's idea of a trade-off for the errors of approximate simultaneous or successive measurements performed on the same system [6,7,8,9]. There are thus two distinct operational aspects of the uncertainty principle [10]: (a) the preparation of a state with both A and B having well defined values is impossible; (b) a measurement of A inevitably disturbs any subsequent or simultaneous measurement of B. Statement (a) paraphrases the content of the Robertson-Schrödinger inequality as an expression of preparation uncertainty. 
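Statement (a) can be checked numerically for a qubit; the short sketch below evaluates both sides of the Robertson-Schrödinger inequality for two Pauli observables in an arbitrary illustrative state:

```python
import numpy as np

# Numerical check of the Robertson-Schrodinger bound dA*dB >= |<[A,B]>|/2 for qubit
# observables; the test state below is an arbitrary illustrative choice.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expval(op, psi):
    return np.real(np.vdot(psi, op @ psi))

def stdev(op, psi):
    return np.sqrt(expval(op @ op, psi) - expval(op, psi) ** 2)

theta = 0.4
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)  # state in the x-z plane

A, B = sx, sy
lhs = stdev(A, psi) * stdev(B, psi)
rhs = 0.5 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))   # |<[A,B]>|/2 (kept complex before abs)
print(f"dA*dB = {lhs:.4f} >= |<[A,B]>|/2 = {rhs:.4f} : {lhs >= rhs - 1e-9}")
```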
In contrast, (b) points to a necessary trade-off between the inaccuracy in an approximate measurement of A and the disturbance of a subsequent or simultaneous measurement of B [11]. The latter trade-off constitutes a measurement uncertainty relation (MUR) (or, in the case of successive measurements, more specifically called an error-disturbance relation (EDR)). In recent years there have been debates over the formulation of MURs or EDRs due to disagreements on what constitutes an appropriate quantification of error and disturbance in quantum measurements. New inequalities for uncertainty relations in the spirit of (b) were independently proposed [11,12,13,14,15,16,17,18,19] with some of them later verified experimentally [20,21,22,23,24,25,26,27,28]. Here we focus on the approach of Busch, Lahti and Werner (BLW) [19,29], which is based on Figure 1. Schematic of optimal joint measurement for verifying a Heisenberg-type measurement uncertainty relation. The quantum apparatus carries out approximate measurements of the incompatible observables A and B by joint measurements of the compatible observables C and D. In our experiment, C and D cannot be detected directly, but obtained from the POVM M which is measured experimentally. The aim is to obtain optimal joint measurements by choosing appropriate measurement settings so as to minimise the errors ε a and ε b , as defined in the text via the Wasserstein distances employed in the BLW approach. the choice of two compatible observables C and D to approximate two incompatible observables A and B (Fig. 1). The error for (say) C as an approximation of A is defined as the worst-case deviation between the probability distributions of A and C across all states. Being state-independent quantities, these errors are figures of merit characterising the performance of the measuring device. However, a question arises: how to exactly determine the precise boundary line for the admissible error region [30,31,32], which is crucial not only to the foundation of the Heisenberg's uncertainty relation, but also to realistic quantum operations. The present work reports the first experimental confirmation of the optimal error bound, based on the realisation of a joint measurement scheme ( Fig. 1) proposed by Yu and Oh [31]. The Yu-Oh scheme is designed to achieve ultimate lower bounds for the error pairs by optimising the joint measurement. Although the scheme was later clarified with physical interpretation [32], it remained unclear whether it is adapted to operations in real physical systems. Our experiment utilises the spin of a pure quantum system, a single trapped 40 Ca + ion. Compared with other experimental setups that operate with ensembles, the single trapped ion can provide more direct and credible evidence as a verification of quantum foundational predictions. By unitary operations under carrier transitions, we demonstrate with high-level control the optimal error bounds achievable by joint measurements of two compatible observables of a qubit encoded in the ion. As witnessed below, our results test precisely the tight bounds of the error trade-off relation and characterise completely the admissible error region. Our experiment constitutes a direct test of the relevant uncertainty relation in the precise sense explained in [33]: it provides a comparison of the relevant statistics and hence is based on a true error analysis. 
In contrast, the tests reported in [20,21,22,23,24,25,26,27] regard the purported error quantities as some quantum mechanical expectation values that are to be determined by statistics of experiments that have nothing to do with an error analysis whatsoever [29]. Hence our results are more directly relevant to the exploration of the fundamental quantum limits of high precision measurements. The experimental system A single 40 Ca + ion is confined in a linear Paul trap, whose axial and radial frequencies are ω z /2π = 1.01 MHz and ω r /2π = 1.2 MHz, respectively. Under the magnetic field of 6 Gauss, we encode the qubit in |4 2 S 1/2 , m J = +1/2 as | ↓ and in |3 2 D 5/2 , m J = +3/2 as | ↑ , where m J is magnetic quantum number (see Fig. 2(a)). We couple the qubit by a narrow-linewidth 729-nm laser with wave vector k at an angle of 22.5 o to the trap z-axis, resulting in a Lamb-Dicke parameter of η z ≈ 0.09. After Doppler cooling and optical pumping, the z-axis motional mode is cooled down to the vibrational ground state with the final average phonon numbern z < 0.1 by the resolved sideband cooling. The ion is initialised to | ↓ with a probability about 98.7%. With the 729-nm laser pulses, the system evolves under the government of the carrier-transition operator where θ = Ωt is determined by the evolution time, Ω and φ are the Rabi frequency representing the laser-ion coupling strength and the laser phase, respectively. Each experimental cycle is synchronised with the 50-Hz AC power line and repeated 40,000 times. The 729-nm laser beam is controlled by a double pass acousto-optic modulator. The frequency sources for the acousto-optic modulator are based on a direct digital synthesiser controlled by a field programable gate array. Employment of the direct digital synthesiser helps the phase-and frequency-control of the 729-nm laser during each experimental operation. The optimal error trade-off relation Before presenting our experimental results, we first describe the Yu-Oh proposal briefly in a geometric way, the main idea of which is graphically sketched in Fig. 2(b). The aim is to find the optimal trade-off between the errors in joint measurement of two incompatible observables. We consider two generally incompatible qubit observables A and B, represented by the operators a · σ and b · σ, respectively. Here a, b are unit vectors and σ = (σ x , σ y , σ z ) is the vector whose components are the Pauli matrices. The angle between a and b is specified by sin χ = a × b . As approximations of A and B, we employ positive operator-valued measures (POVMs) C and D, whose first moment operators are c · σ and d · σ, respectively, with c , d ≤ 1. Thus, as indicated in ). The errors ǫ a and ǫ b are given, respectively, by ǫ a = a − c and ǫ b = b − d , whose operational meaning is explained in the text. (c) Experimental implementation steps and the corresponding states of the system. The ion is first laser-cooled down to nearly the ground state of the quantised vibration. The system starts from the qubit state | ↓ and evolves to |ζ under the preparation pulse U C (θ 1 , φ 1 ). Then the measurement pulse U C (θ 2 , φ 2 ) rotates the system to the measurement state |ξ , followed by the detection on | ↑ . See details in Subsection 3.1. Fig. 2(b), the errors for these observables are given by ǫ a = a − c and for which the optimal error trade-off relation is given below. 
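The quantities just introduced can be made concrete with a short numerical sketch. The explicit matrix of the carrier-transition operator is not written out above, so the standard resonant-carrier form is assumed here, and the Bloch vectors are illustrative; the sketch also checks the statistical reading of ε_a as a deviation between outcome distributions in the state whose Bloch vector is parallel to a − c, a condition used later in the text:

```python
import numpy as np

# Sketch of the quantities introduced above: an assumed standard carrier pulse U_C(theta, phi),
# the incompatibility sin(chi) = |a x b|, and the approximation errors eps_a = |a - c|,
# eps_b = |b - d|. All vectors are illustrative choices.

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def carrier_pulse(theta, phi):
    """Assumed standard form of the resonant carrier-transition unitary."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * np.exp(-1j * phi) * s],
                     [-1j * np.exp(1j * phi) * s, c]])

def dot_sigma(v):
    return sum(float(vi) * si for vi, si in zip(v, sig))

def outcome_probs(x, rho):
    """+/- outcome probabilities of the (possibly unsharp) observable with first moment x.sigma."""
    return [float(np.real(np.trace((I2 + s * dot_sigma(x)) / 2 @ rho))) for s in (+1, -1)]

a = np.array([0.0, 1.0, 0.0])                  # A = sigma_y
b = np.array([0.0, np.sqrt(3) / 2, 0.5])       # B = (sqrt(3)/2) sigma_y + (1/2) sigma_z
c, d = 0.8 * a, 0.8 * b                        # illustrative, jointly measurable approximations
print("sin(chi) =", round(np.linalg.norm(np.cross(a, b)), 3))
print("eps_a, eps_b =", round(np.linalg.norm(a - c), 3), round(np.linalg.norm(b - d), 3))

# Worst-case state: Bloch vector parallel to a - c; the summed probability difference equals |a - c|
r1 = (a - c) / np.linalg.norm(a - c)
rho1 = (I2 + dot_sigma(r1)) / 2
deviation = sum(abs(pa - pc) for pa, pc in zip(outcome_probs(a, rho1), outcome_probs(c, rho1)))
print("statistical deviation at worst-case state =", round(deviation, 3))

U = carrier_pulse(1.2, 0.7)
print("carrier pulse is unitary:", np.allclose(U @ U.conj().T, I2))
```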
To describe the optimisation procedure, we first recall that the constraint of the compatibility (or joint measurability) of the observables C and D is equivalent to the inequality f (c, d) ≡ c+d + c−d ≤ 2. For any fixed c (d), this inequality defines an ellipsoid that restricts the possible choices of d (c). This ellipsoid, centred at the origin, lies inside the unit ball and has the major axis given by a diameter in the direction of c (d). It indicates that optimal approximation errors will be achieved by choosing c and d in the plane spanned by a and b. Thus one may consider two ellipses in that plane to characterise the compatibility of C and D, see Fig. 2(b). To minimise the errors in this case, we draw two circles centred at the end points of the vectors a and b, so that the errors ǫ a and ǫ b are the radii. To carry out the optimisation, we fix an error value, e.g., ǫ a , which can be realised by many values of c. For each of these possibilities, we can find a specific d that gives the smallest value of ǫ b . Among all those ǫ b , we choose the smallest. The chosen pair (ǫ a , ǫ b ) satisfies the geometric condition, shown in Fig. 2(b), that the two circles are tangent to their corresponding ellipses and a−c is perpendicular to b − d, and thereby lies on the lower boundary curve of the admissible error region. As such, the optimal, minimal distances from the vectors a, b to vectors c, d inside the relevant elliptic regions defined by f (c, d) ≤ 2 occur at the boundaries, i.e., f (c, d) = 2. Under this condition, we obtain the optimal results and refer to (ε a , ε b ) as the optimal worst-case (OWC) error pair. As proven in [31], the OWC error pairs provide an ultimately tight lower bound for the error trade-off relation. The mapping ϕ → (ε a (ϕ), ε b (ϕ)) describes a curve in the (ǫ a , ǫ b )space that marks the boundary between the regions of admissible and inadmissible error pairs for the compatible observables C and D. For our purpose, we define ϕ to be the angle that satisfies sin ϕ = (1 − d 2 )/(1 − (c · d) 2 ) and cos ϕ = (1 − c 2 )/(1 − (c · d) 2 ) for c and d such that f (c, d) = 2 is given with ϕ ∈ [0, π/2]. Then a Heisenberg-type MUR for the pair of observables can be written as a family of error trade-off relations [31], which collectively describe the admissible error region. The physical interpretation of this inequality is that of an intricate interplay between the incompatibility of A, B and the unsharpness of C, D (required by their compatibility) resulting in a lower bound to the approximation errors ǫ a and ǫ b . For any value of ϕ, the equality in Eq. (4) is achieved only for a particular pair (ε a , ε b ) of OWC error values. Eq. (4) represents a trade-off relation between the errors, i.e., the incompatibility of the target observables A, B (through the term sin χ which is a function of the commutator of a · σ and b · σ), and the parameter ϕ. An interpretation of ϕ can be given in terms of the degrees of unsharpness of the compatible observables C and D (defined by u 2 32]. Putting ϕ = π/4, Eq. (4) reduces to the simple inequality considered in [19], This relation is weaker than Eq. (4), and the straight line defined by it in the (ǫ a , ǫ b )-plane touches the convex region defined by Eq. (4) exactly in the point where The maximal lower bound, i.e., 2 − √ 2, occurs in the case that the vectors a and b associated, respectively, with A and B are perpendicular to each other (χ = π/2). 
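The compatibility constraint and the ϕ = π/4 bound of Eq. (5) can also be illustrated numerically: in the sketch below, randomly sampled compatible pairs (c, d) never achieve an error sum below √2(√(1 + sin χ) − 1). Random sampling is only an illustration and is not the Yu-Oh optimisation itself:

```python
import numpy as np

# Illustration of the constraint f(c, d) = |c+d| + |c-d| <= 2 and of the simple bound
# eps_a + eps_b >= sqrt(2) * (sqrt(1 + sin chi) - 1) for randomly sampled compatible pairs.

rng = np.random.default_rng(1)

def compatible(c, d):
    return np.linalg.norm(c + d) + np.linalg.norm(c - d) <= 2.0

a = np.array([0.0, 1.0, 0.0])        # A = sigma_y
b = np.array([0.0, 0.0, 1.0])        # B = sigma_z, so sin(chi) = 1 and the bound is 2 - sqrt(2)
sin_chi = np.linalg.norm(np.cross(a, b))
bound = np.sqrt(2) * (np.sqrt(1 + sin_chi) - 1)

best = np.inf
for _ in range(100_000):
    c, d = rng.uniform(-1, 1, 3), rng.uniform(-1, 1, 3)
    if np.linalg.norm(c) <= 1 and np.linalg.norm(d) <= 1 and compatible(c, d):
        best = min(best, np.linalg.norm(a - c) + np.linalg.norm(b - d))

print(f"smallest sampled eps_a + eps_b = {best:.3f} >= lower bound {bound:.3f}")
```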
The OWC errors of the experimental measurement are directly determined from a comparison of the statistics of A (B) with the statistics of C (D). The lower bound errors are given by where the measurement probability distributions are p X ± ρ 1,2 =Tr[X ± ρ 1,2 ] with the quantum states determined by ρ 1,2 = (I + r 1,2 )/2 and X ± = (I ± x · σ)/2 for x = a, b, c and d. As seen from Fig. 2(b), the OWC errors defined in Eq. (6) appear under the conditions that r 1 (r 2 ) is parallel to a − c (b − d) and r 1 is perpendicular to r 2 . The single-qubit measurement The incompatible observables A and B are directly measured in a single qubit, but compatible observables C and D are obtained by joint measurements on a POVM M = {M µν }, given by the rank-1 positive operators with µ, ν = ±1. As such, we obtain the marginality relations C ± = M ±+ + M ±− and D ± = M +± + M −± , where C ± = (I ± c · σ)/2, D ± = (I ± d · σ)/2 and we simply write where P ± = (1 ± h)/2. Figure 3. Experimental values of phases in the preparation pulse U C (θ 1 , φ 1 ) and the measurement pulse U C (θ 2 , φ 2 ) for A = σ y and B = √ 3 2 σ y + 1 2 σ z with χ = π/6. (a) θ 1 for 729-nm laser pulse with the prepared states of ρ 1 and ρ 2 denoted by solid and dashed curves, respectively, and the phases φ 1 being zero. (b) θ 2 in the measurement pulse U C (θ 2 , φ 2 ) with M ++ , M +− and M −+ denoted by solid, dashed and dasheddotted curves, respectively. For clarity, we omit the curves for the cases of A + and B + , in which θ 2 are fixed to be π and π/3, respectively. (c) φ 2 in the measurement pulse U C (θ 2 , φ 2 ) with M +− and M −+ denoted by dashed and dashed-dotted curves, respectively. For clarity, we omit the curves for the cases of M ++ , A + and B + , in which φ 2 is fixed to be zero. The single-qubit experimental steps for optimal joint measurement is schematically illustrated in Fig. 2(c). We initialise the ion in the state | ↓ by optical pumping, and then prepare the ion to the target state |ζ (see Appendix A) which is determined by ρ 1,2 by a carrier-transition pulse U C (θ 1 , φ 1 ). The next step is to measure the observables A ± , B ± and M ±,± . The measurement process includes a measurement pulse U C (θ 2 , φ 2 ) steering the state from |ζ to |ξ , followed by a detection. The two U c operations take several µs by the 729-nm laser, respectively. In general, the pulse lengths for the successive two processes are smaller than a Rabi period (2π), implying a duration less than 18 ‡ As clarified below, if two qubits are employed, with one of them as an ancilla, for constructing the POVM M µν , we may experimentally test more curves as the error trade-off relations [19], where the lower bounds of the Heisenberg's uncertainty relation are just the OWC errors demonstrated in the present paper. In this sense, our single-ion experiment, with more precision in control than the two-ion counterpart, is the best candidate for verifying the boundary line for the admissible error region of the Heisenberg's uncertainty relation. µs. Finally, the probability of finding the ion in the | ↑ state is detected by collecting light scattered on the dipole transition and counting the emitted photons for 4 ms by the photon multiplier tube. We exemplify A = σ y and B = √ 3 2 σ y + 1 2 σ z to illustrate the details in the process of the optimal joint measurement. The relevant phases θ 1,2 and φ 1,2 for the U c operations are indicated in Fig. 3(a-c). Experimental observation of the qubit MUR We focus on the data set in Fig. 
4(a,b) for A + , C + , B + and D + with χ = π/6 under the optimal approximation. With the increase of ϕ, one has a better approximation of C to A while the difference of D from B becomes larger, reflecting the error trade-off for these two incompatible observables. In the limit of C or D becoming a perfect approximation, the error for the other reaches its maximum required by the compatibility of C, D and given the incompatibility of A, B. The experimentally observed optimal errors (ε a , ε b ), plotted in Fig. 4(b), agree well with the theoretical prediction. Under the condition of f (c, d) = 2 for optimal approximations, the maximal value of the OWC error along the optimal error curve is sin χ (reached when the other error is zero), which is a direct measure of the incompatibility of the pair of the incompatible observables: both the error peaks and sin χ are 0.5 in Fig. 4(b). By contrast, in the case of f (c, d) < 2, the peak values of errors will be larger than sin χ. In any case, the approximations of A by C and of B by D are still subject to the error trade-off relation of Eq. (4). To further understand the observation of the positive operators, we may check Fig. 4(c) for the unified probability distributions of the measurements. To experimentally test the universality of the error trade-off relation, we have explored different pairs of incompatible observables and measured corresponding optimal errors as shown in Fig. 5. In the left-hand side panels of Fig. 5, the lower bounds examined experimentally are in good agreement with the theoretical prediction by Eq. (4). The regions under the curves represent corresponding forbidden areas, enforced by the incompatibility quantity sin χ in accordance with the MUR. As plotted in the right-hand side panels of Fig. 5, the experimental witness of Eq. (5) also fits well the theoretical prediction, where each lower bound reaches the minimum at ϕ = π/4, where A and B are approximated with equal errors. Experimental imprecision In our experiment, the typical errors are from imperfection of initial-state preparation and the final-state detection, and from heating due to the radial phonons as well as from the statistical errors. The former three of the imperfection factors are experimentally determined errors, which can be partially corrected. In contrast, the statistical errors are not correctable, but contained in standard deviation indicated by error bars. Moreover, the inherent decay and dephasing times of the qubit are, respectively, 1.1 s and 2 ms, whose detrimental effects are negligible during the short periods (∼18µs) of our operations. The Rabi frequency of 729-nm laser in our system is 54(2) kHz and the occupation probability of the initial state is about 98.7(4)%. The detection error yields a mean deviation of 0.22(8)%. The thermal phonons from the radial direction creates an additional dephasing effect on qubit system, yielding the depasing time of 0.24 (15) ms. These imprecision can be partially calibrated by practical methods [34,35]. In contrast, the fluctuation in above three imperfections, which is not correctable, is due to instability of the laser power and the magnetic field. The instable laser power leads to random variation of laser intensity and the small fluctuating magnetic field shifts the resonance frequency randomly. 
Both the unstable factors lead to inaccuracy in initialstate preparations and qubit operations, whose detrimental effects are assessed to be less than 2% from the Rabi oscillation in our case and involved in the standard deviation represented by the error bars. The statistical errors are due to quantum projection noise, a typical environmentinduced noise from vacuum fluctuation. These errors are inevitable in any quantum mechanical measurement, but can be reduced by more measurements and/or by (5). The solid curves denote the analytical results of ε a + ε b , the dashed lines represent the lower bounds √ 2( √ 1 + sin χ − 1) and the shaded regions represent the forbidden areas. The dots in the left-hand side and right-hand side panels are experimental values. From top to bottom, the panels represent sin χ = 1, √ 2/2, 1/2 and 1/3, respectively. Since we set A = σ y , the panels from top to bottom correspond to B = (2 √ 2σ y +σ z )/3, ( √ 3σ y +σ z )/2, √ 2(σ y +σ z )/2 and σ z . Each data point consists of 40,000 measurements and error bars are given by standard deviation. quantum correlation. By Monte Carlo simulation, we assess the statistical deviation for measuring the probability distribution to be 0.0025 and for each of the error pairs to be less than 0.01. These imperfections are involved in the standard deviation represented by the error bars. Conclusion An appropriate understanding of the uncertainty relations is essential to our exploration of new physics and precision measurements. There have been suggestions that enhanced measurement precision may be achievable by beating the so-called standard quantum limit through strategies of getting around the uncertainty relations [36]. It has also been found that the uncertainty relations can contribute to a deeper understanding of nonlocality [37,38]. Uncertainty relations have also been employed to prove the security of quantum key distribution [39] and explore the influence of quantum memory [40]. Our experiment provides the first evidence of confirming the MUR in an optimal joint measurement on a pure quantum system -a single ultracold trapped-ion system. The tests performed cover a range of choices of target observables and demonstrate optimality by sampling joint measurements with error pairs distributed along the whole lower boundary of the admissible error region. Previous theoretical proposals and experimental tests of error trade-off relations ( [12]- [17] and [20]- [26]) are based on the EDRs that are not generally amenable to a direct comparison of the approximating and target observables-unless one restricts the class of the former to those that commute with the latter; in that case, one cannot consider the trade-off relations in question to be universal [33]. In contrast, the determination of the OWC errors in the present experiment is obtained through a direct comparison of the statistics of the observables C and D with those of A and B. Our experiment therefore constitutes the first direct test of a MUR. In practice, our methods and findings, verifying the optimal joint measurements, are of potential applications in quantum information science, e.g., by providing new calibration protocols for quantum precision measurements or by leading to new ways of guaranteeing the information security in quantum cryptographic protocols.
5,839.4
2017-05-03T00:00:00.000
[ "Physics" ]
The economics of electric roads In this paper we present a method for evaluating social benefits of electric roads and apply it to the Swedish highway network. Together with estimated investments costs this can be used to produce a cost benefit analysis. An electric road is characterized by high economies of scale (high investment cost and low marginal cost) and considerable economies of scope (the benefit per kilometre electric road depends on the size of the network), implying that the market will produce a smaller network of electric roads, or charge higher prices for its use, than what is welfare optimal. For this reason, it is relevant for governments to consider investing in electric roads, making the cost-benefit analysis a key decision support. We model the behaviour of the carriers using the Swedish national freight model system, SAMGODS, determining the optimal shipment sizes and optimal transport chains, including mode and vehicle type. We find that if the user charge is set as to optimize social welfare, the revenue will not fully cover the investment cost of the electric road. If they are instead set to optimize profit for the operator of the electric road operator, we find that the revenue will cover the costs if the electric road network is large enough. Electric roads appear to provide a cost-effective means to significantly reduce carbon emissions from heavy trucks. In a scenario where the expansion connects the three biggest cities in Sweden, emissions will be cut by one-third of the overall emissions from heavy trucks in Sweden. The main argument against a commitment to electric roads is that investment and maintenance costs are uncertain and that, in the long run, battery development or hydrogen fuel cells can reduce the benefit of such roads. Introduction There has been a surge of interest in reducing carbon emissions from heavy trucks in recent years, largely due to ambitious emission targets for transport in many countries as well as in the European Union. While light traffic and probably also regional freight distribution trucks can be electrified using batteries, this is a bigger challenge for long range heavy trucks. The latter would need heavy batteries or frequent recharging incurring delays. For this reason, electric roads, with continuous electricity transmission, has been developed and tested in Sweden and in Germany. In this paper we present a method for evaluating social benefits of electric roads and apply it to the Swedish highway network. Together with the investment cost this can be used to produce a cost benefit analysis. The electric road is characterized by economies of scale (high investment cost and low marginal cost) and considerable economies of scope (the benefit per kilometre electric road depends on the size of the network), implying that the market will produce a smaller network of electric roads, or charge higher prices for its use, than what is welfare optimal. For this reason, it is relevant for governments to consider investing in electric roads, making the cost-benefit analysis a key decision support. There is, however, prior to this paper, no literature developing methods for assessing the economic rationale of electric roads. We assume that all trucks that can receive electric power while in motion are hybrids, such that they also have a diesel engine to be used on non-electrified parts of the road network. This makes the hybrids more expensive to buy than a conventional diesel truck. 
The user charge of the electric road can either be set as to optimize welfare or to optimize the profit for the operator of the road. We calculate the net benefit cost ratio (NBCR) and cost recovery in both cases. We also outline arguments for private and publicly owned electric roads. The benefit of the electric roads depends on the number of trucks using them. The use depends on the total volume of trucks and the number of these that are (electric-diesel) hybrids. The number of diesel trucks that haulage companies would eventually replace by hybrids will be determined by the profit that they can make from such replacements, assuming that they behave to optimize their profits. The carriers' optimal number of hybrids depend on a) the spatial distribution of freight flows by commodity, b) the spatial distribution of the electric road network c) the difference in driving cost per kilometre between using diesel and electric power received from the electric road, and d) the difference in capital cost between the diesel and the hybrid truck. We model the behaviour of the carriers using the Swedish national freight model system, SAMGODS, determining the optimal shipment sizes, transport chain and route, including the mode (road, rail, sea) and vehicle type (Diesel60, Hybrid60, Diesel40, Hybrid40, Diesel24) 1 choices of the carriers for a given electric road network. Hence, we take into account that freight transport can divert also from rail and sea to road, if electric roads make freight transport by road cheaper. We make extensive sensitivity analyses with regard to factors b)-d) above. The impact of the spatial distribution of the electric road network is analyzed by assuming three different network scenarios: small, medium and large. The difference in driving cost per kilometre of using diesel or the electric road depends on the prices of diesel and electricity, respectively, and on the energy consumption of diesel trucks versus trucks powered by electricity. The operation cost will also be determined by the user charge on the electric roads. We will therefore vary future electricity prices, diesel prices and analyze the difference between welfare optimal and profit maximizing user charges. When assessing the cost of the electric road we assume the technology for overhead power lines because this is presently the most mature technology (there are other technologies using conduction or electromagnetic induction from below). In future years, electric roads using other technologies might also be relevant to analyze. Moreover, we assume that no electric roads exist outside of Sweden, which would likely increase the benefit of them in Sweden due to economies of scope and scale. Economies of scope and scale across Europe would be relevant to evaluate if countries choose to collaborate on the implementation of electric highways. The methodology of this paper could, however, still be used if extending the analysis or changing cost assumptions in this way. We find that the size (and location) of the network is of key importance for the use (and therefore the benefits) of the electric roads, hence we find economics of scope up to a threshold size. A key reason for the larger network being more profitable per kilometre (below a threshold size) is that the carrier's optimal number of hybrids increases with the size of the electric road network. 
However, when the most heavily used roads are already electrified, the marginal benefit per kilometre of electric road extensions declines with the size of the network. We find that electric roads will result in a significant reduction in carbon emissions from heavy traffic. In a scenario where the electric road system covers the highways connecting the three largest cities (Stockholm, Gothenburg and Malmö), CO2 emissions is estimated to decline by approximately 1.2 million tonnes in 2030, corresponding to one third of all carbon emissions from heavy trucks in Sweden. We find that if the user charge is set as to optimize social welfare, the revenue will not cover the investment cost of the electric road. However, if they are instead set to optimize profit, the revenue will cover the costs if the electric road network is large enough. Finally, we investigate if intermittent operation of the electric road (gaps in the electric transmission) can increase the net benefit cost ratio. On the one hand the investment cost can then be reduced, but on the other hand this would require the hybrids to have larger batteries to bridge the gaps of the electric roads. We find that intermittent operation is likely to increase the cost benefit ratio. Method To understand how the carriers will respond, we assume that the total demand for freight truck kilometres are . Assume further that out of , kilometres are fuelled by diesel (using a hybrid or a diesel truck) with the kilometre cost and that kilometres are fuelled by electricity received from an electric road with a kilometre cost . Now, if the extra capital cost of the hybrids, that can receive electric power from the electric road, compared to a standard diesel truck is . The carriers will determine the number of hybrids, n, they will buy by minimizing their transport cost I.e. the carriers will only invest in an additional hybrid if lower driving cost can compensate for the additional capital cost. The number of kilometres fuelled by electric power received from the road, given a positive kilometre cost difference − , is ruled by the function where is the length of the total electric road network, is the number of hybrids that the carrier owns, and is a constant. Carriers with few hybrids can reduce cost by letting the hybrids operate on routes having large overlaps with the electric roads. However, the more hybrids a carrier has, the more it will use the hybrids also on routes with less overlap. For this reason, the parameter <1. Moreover, the larger the electric road network is, the larger part of total routes will be covered by the electric roads. For this reason, we will have economics of scope implying α> 1. However, when the full length of the most heavily used parts of the road network is already electrified, we expect that increases slower than linearly, i.e. that α< 1. Carriers optimizing the number of trucks yields the first order condition of giving the optimum number of trucks Hence, the lower additional capital cost of the hybrid, the larger the kilometre cost difference − , and the more extensive the electric road system is, the more electric trucks will the carriers buy. Plugging (5) into (2) gives the resulting number of electric road kilometres A public operator maximizing welfare So how would the user charge of the electric road be set? Assuming a public operator, optimizing welfare, the first best optimal user charge should be set so that the user pays the full marginal external cost of use. 
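Since several symbols in the cost function and in equations (2)-(6) are not legible here, the following sketch restates the carriers' fleet-choice problem under an assumed notation (per-kilometre costs c_d and c_e, extra annual capital cost K per hybrid, network length S, and electric kilometres q_e(n) = γ n^β S^α with β < 1 < α); all numerical values are illustrative:

```python
# Sketch of the carriers' fleet choice under an assumed notation: electric kilometres
# q_e(n) = gamma * n**beta * S**alpha, extra annual capital cost K per hybrid, and per-km
# costs c_d (diesel) and c_e (electric road). All numerical values are illustrative only.

def electric_km(n, S, gamma=200.0, alpha=1.2, beta=0.5):
    """Kilometres driven on power from the electric road, for a fleet of n hybrids."""
    return gamma * n ** beta * S ** alpha

def optimal_hybrids(K, c_d, c_e, S, gamma=200.0, alpha=1.2, beta=0.5):
    """Solve the first-order condition K = (c_d - c_e) * dq_e/dn for the fleet size n."""
    return ((c_d - c_e) * gamma * beta * S ** alpha / K) ** (1.0 / (1.0 - beta))

K = 8300.0                 # extra annual capital cost of a hybrid (euro/year, figure quoted later in the text)
c_d, c_e = 0.60, 0.30      # illustrative per-km operating costs (euro/km)
for S in (200, 600, 1000): # illustrative network lengths (km)
    n = optimal_hybrids(K, c_d, c_e, S)
    print(f"S = {S:>4} km: optimal hybrid fleet ~ {n:6.1f}, electric km/year ~ {electric_km(n, S):12,.0f}")
```

The superlinear growth of the optimal fleet, and of the electric kilometres, with S is the economies-of-scope mechanism referred to above.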
The external costs of electric trucks include wear and tear of the road infrastructure and of the electric infrastructure, noise and accidents. These costs are partly internalized by the tax on electricity since we assume that the electricity production as such has no external cost. In Sweden the marginal electricity production does normally not cause any direct carbon emissions. Moreover, the EU-ETS system implies that no electricity used in transport in any EU country generates carbon emissions. In this paper we rule out other positive externalities of building electric roads than network externalities, such as learning by doing for electric roads and vehicles, simply because it is difficult to assess them. However, heavy diesel trucks are not charged for all external costs. In fact, both freight trains and heavy trucks pay less through taxes or rail fees than the external costs they incur on society. We therefore assume the second-best user charge, i.e. the optimal charge for using electric roads given the present tax on diesel fuel. Suppose again that the total number of truck kilometres is = + . Assume further that the non-internalized external cost of the electric road use is (taking only the tax on electricity into account) and that the non-internalized external cost of the diesel trucks is (taking the tax on diesel into account). Assuming inelastic demand , the second-best user charge is − . An operator maximizing profit The electric road operator is a monopolist who will set the price so as to maximize the profit The first order condition becomes where is the marginal cost of production per truck kilometre. Note that includes the cost of electricity (spot price, energy tax and grid fees) as well as the marginal cost of wear and tear on the electric road system. As before reflects the fuel cost per kilometre for the electric truck when it is fuelled by electric power received from road. The optimal price that the operator will charge is hence ( ) is determined by (5) and taking the derivative of ( ) with respect to we have Plugging (9) and (5) into (8) we find that Note that the optimal price, or the user fee, does not depend on the extra capital cost K for the electric trucks. On the one hand, the number of electric trucks and therefore decreases if the extra capital cost K increases, on the other hand the derivative decreases (in absolute amount) as K increases. Model To simulate the non-linear effect of the network size, we study the likely response of the carriers given three different electric road networks: • A small network consisting of E4 between Stockholm and Norrköping with a length of 31.5 km (sum of both directions) • A medium-sized network consisting of E4 between Stockholm and Malmö with a length of 121.1 km (both directions) • A large network consisting of the European roads between Stockholm and Malmö (E4), and Malmö and Gothenburg (E6) and the national road between Gothenburg and Jönköping (Rv 40). Total length 191.4 miles (both directions). The Swedish national freight model system, SAMGODS, builds on the basic assumption that the carriers minimize the transport cost. The transport cost functions include shipment time, varying by commodity, and pecuniary cost. The model uses freight demand for 34 commodity groups by production and consumption zones as input. Zones are on the municipality level in Sweden. In neighbouring countries zones correspond to the NUTS-2 level 2 , and in countries further away the zones are on the country (groups of countries) level. 
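As a numerical complement to the pricing discussion above (the closed-form conditions (8)-(11) are not legible in this excerpt), the sketch below finds the profit-maximising user charge by grid search over the carrier response from the previous sketch; parameter values are again illustrative:

```python
import numpy as np

# Numerical complement to the pricing discussion: the profit-maximising user charge is
# found by grid search over the carriers' response q_e(p) from the fleet-choice sketch.
# Parameter values are illustrative only.

K, c_d, S = 8300.0, 0.60, 1000.0
gamma, alpha, beta = 200.0, 1.2, 0.5
energy_cost = 0.12       # per-km cost of electricity for the truck, excluding the user charge
marginal_cost = 0.09     # operator's marginal cost per electric truck-km (wear and tear etc.)

def electric_km(p):
    """Electric truck-km as a function of the user charge p, via the carriers' optimal fleet."""
    c_e = energy_cost + p
    if c_e >= c_d:
        return 0.0
    n = ((c_d - c_e) * gamma * beta * S ** alpha / K) ** (1 / (1 - beta))
    return gamma * n ** beta * S ** alpha

prices = np.linspace(marginal_cost, c_d - energy_cost, 2000)
profits = [(p - marginal_cost) * electric_km(p) for p in prices]
p_star = prices[int(np.argmax(profits))]
print(f"profit-maximising charge ~ {p_star:.3f} euro/km "
      f"(vs marginal-cost charge {marginal_cost:.2f} euro/km, illustrative parameters)")
```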
The model takes into account domestic freight transport demand, as well as international freight transport demand. The freight demand is disaggregated into demand between three size class levels of firms (small, medium and large) giving nine demand types. The optimal shipment sizes and optimal transport chains, including the choice of mode and vehicle type 3 are computed assuming that the carriers minimize transport cost. The 60-ton trucks are only allowed to operate in Sweden and Finland. Besides trucks, the model includes 7 train sets and 20 ships of varying types and sizes. Implementing electric hybrid truck in Samgods is done by changing distance transport costs and capital costs of the hybrid trucks. The hybrid truck has a higher capital cost (cost per year) than the standard diesel truck. On the other hand, the hybrid truck has a lower distance cost − , but only on road segments where electricity is used. 2 NUTS (Nomenclature of Territorial Units for Statistics) is a hierarchical classification system for a spatial division the EU territory. The spatial resolution of the NUTS 2 level distinguishes basic regions, meant to be used in evaluations of regional policies. Input data 3.1 CBA parameters and traffic growth The cost-benefit analysis is based on the forecast year 2030. For this year a main scenario with electric roads is compared to a base scenario, without the electric road. In the base scenario we assume that all heavy traffic is fuelled by diesel. Based on the comparison between the two 2030 scenarios, the total benefit of the investment is derived by assuming that the benefits and costs for the forecast year increase linearly with demand during the appraisal period 2025-2040. We assume that demand increases linearly with the Swedish Transport Administration's forecast of traffic growth, given in the leftmost column in Table 1. In a sensitivity analysis, we assume a significantly lower growth rate based on an average annual growth rate over the period 2000 to 2018, given in the rightmost column of the table. We assume the relatively short appraisal period of 15 years mainly because the technical development of batteries and fuel cells after 2040 is difficult to predict. If the electric road is not in place until 2030, the appraisal period extends to 2045. Note that the choice of opening year has a marginal effect on the resulting NBCR, since both costs and benefits are discounted if the opening year changes. All prices are given in the price level of 2018 and using the currency exchange rate of €1=SEK 10. The demand for rail and sea transport is reduced by electric roads, as lower road transport costs induce a transfer of freight demand from rail and sea to road. The annual growth rate of 1.4 per cent per year is assumed for rail traffic based on the Swedish Transport Administration's forecast (2018). For shipping, the annual growth rate of 1.9 per cent per year is assumed, also based on the Swedish Transport Administration's forecast (2018). The marginal cost of using public funds is taken to be 1.3 (Sørensen, 2010). Carbon emissions are valued at 0.114 €/kg in 2017, according to the Swedish appraisal guidelines ASEK 6 from 2019 (Swedish Transport Administration, 2018). This is approximately fourfive times the price of EU emission allowances in 2019. The price of carbon emissions equals the carbon tax on fuel, so that the tax perfectly internalizes the cost of carbon emissions. Now, the fuel tax consists of two parts, energy tax and carbon tax. 
Only the former equals (and internalizes) the carbon emissions. The energy tax had initially mainly fiscal motives, but is now rather viewed as internalizing other external effect of motor vehicles, such as noise, accidents and wear and tear of the road infrastructure. Both the value of carbon emissions and the carbon tax (and the energy tax) is assumed to increase annually by on average 1.5 percent in real terms over our appraisal period, so that in 2030 the value of carbon emissions is are 0.136 € /kg. The marginal cost of wear and tear on the electric road system of the electric road, mainly the contact wire, is taken to be € 0.088 per kilometre based on estimates of wear and tear on the electrical system for trains (Odolinski, 2018). All input parameters are summarized in Table 1. Since one of the main effects of electric roads is reduced carbon emissions, the value of carbon emissions has a considerable impact on their social benefit. The external costs of truck traffic in terms of accidents, noise and wear and tear on the road infrastructure are taken from Johansson and Johansson (2018). The external cost of healthhazardous emissions is taken from the Swedish appraisal guidelines. All external effects from rail traffic are not internalized in the track charges. The noninternalized external cost of rail traffic is on average 0.17 k€ per tonne kilometre (Nilsson and Haraldsson, 2018). Hence, society benefits from less rail transport. This is considered in the cost-benefit analysis. The non-internalized external cost of shipping is only on average 0.042 k€ per tonne kilometre, because they are nearly internalized by port and fairway fees (Vierth and Lindé, 2018). Price of driving The demand for using the electric road in the forecast year 2030 depends on the cost of diesel and the cost of driving on electricity, respectively. This is determined by energy consumption per kilometre, the user charge of the electric road, and the price of diesel and electricity, respectively, including taxes. In 2019, the spot price of electricity was 0.035 €/kWh in Sweden. The Swedish Energy Agency (2019) forecasts that the price will increase to 0.038 €/kWh in 2030 (partly due to that the price of emission permits is assumed to increase as the cap in the EU's trading system ETS is gradually lowered). These assumptions give a total electricity price of 0.079 k€/ kWh in 2030, including electricity tax, grid tariff and the spot price. In the main analysis we also assume that the diesel price, including fuel tax but not VAT, is 1.53 €/litre in 2030. The future electricity and diesel prices are uncertain. For this reason, we undertake sensitivity analyses. In the first sensitivity analysis we assume a doubled spot price of electricity 2030, i.e. 0.07 k€/kWh, yielding an electricity price in total of 0.11 k€/kWh (including tax and the electricity grid tariff). In the second sensitivity analysis we assume that the price of diesel is € 1.88 per litre in 2030 (including fuel tax but not VAT). Driving costs also include the costs of vehicle wear, maintenance, tires, value reduction etc. We assume that these costs are the same for electric and diesel trucks. According to Kühnel et al. (2018), a 40-tonne vehicle using the electric road will consume 1.51 kWh/km in 2030. A diesel truck consumes twice as much energy, 3 kWh/km, implying 3.1 litres of diesel per kilometre. 4 60-tonne trucks consume 30 percent more fuel than 40tonne trucks according to the Swedish Association for Road Transport Companies. 
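A back-of-the-envelope check of the resulting per-kilometre energy costs, and of the annualised extra capital costs quoted in the following subsection, is sketched below; converting the diesel energy use into litres assumes an energy content of roughly 9.7 kWh per litre, and the annuity formula is an assumption about how the yearly figures were derived:

```python
# Per-kilometre energy costs for a 40-tonne truck in 2030, from the prices and consumption
# figures quoted above, plus the annualised extra capital cost of the hybrid quoted in the
# following subsection. The diesel energy content and the annuity formula are assumptions.

electricity_price = 0.079     # euro/kWh in 2030, incl. tax, grid tariff and spot price
diesel_price = 1.53           # euro/litre in 2030, incl. fuel tax, excl. VAT
electric_consumption = 1.51   # kWh/km on the electric road (40-tonne truck)
diesel_energy_per_km = 3.0    # kWh/km for the comparable diesel truck
litres_per_km = diesel_energy_per_km / 9.7   # assumed diesel energy content of ~9.7 kWh/litre

cost_electric = electric_consumption * electricity_price
cost_diesel = litres_per_km * diesel_price
print(f"energy cost: {cost_electric:.3f} euro/km (electric road) vs {cost_diesel:.3f} euro/km (diesel)")

def annual_cost(investment, rate=0.04, years=7):
    """Standard annuity over the depreciation period."""
    return investment * rate / (1 - (1 + rate) ** -years)

print(f"extra capital cost, hybrid (+50 000 euro): ~{annual_cost(50_000):,.0f} euro/year")
print(f"extra capital cost, hybrid with 100 km battery (+69 000 euro): ~{annual_cost(69_000):,.0f} euro/year")
```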
We compute the second-best user charge as − . Hence, the difference in external costs per kilometre between driving on diesel and using the electric road should equal the corresponding difference in tax and user charge. According to table 3, the difference in marginal external cost between diesel and the electric road truck is 0.017 and 0.049 €/km, respectively (we assume that other marginal external costs are equal for diesel trucks and trucks using the electric road such that they cancel out). The difference in tax is 0.111 and 0.145 €/km, respectively. Note that the energy tax on electricity is not a user charge that covers infrastructure cost and such (this is covered by the grid charge). The energy tax on electricity is mainly fiscal but could just as well internalize external cost of wear and tear of the road infrastructure and accidents. These numbers imply that the welfare optimal user charge for the electric road is 0.094 €/km for 40 tonne trucks and 0.096 €/km for 60 tonne trucks (in 2030). Then the difference in external costs per kilometre between driving on diesel and using the electric road equals the corresponding difference in the sum of tax and user charge. We use figures from Kühnel m. fl. (2018) to compute the extra capital cost of the hybrid. They estimate the incremental cost of a hybrid with a 350-kW diesel engine and an electric engine of the same power to be € 50 000. They include a battery with a range of only 10 to 20 kilometres. The pantograph constitutes roughly half of the additional cost of the hybrid. But the electric engine is cheap and makes up only approximately one tenth of the additional cost. It is likely that the incremental capital cost of the hybrid declines over time as the price of the internal combustion engine is likely to increase due to more stringent exhaust requirements. But since the latter is more uncertain, we adopt the additional cost mentioned above in our analysis. Assuming a depreciation period of 7 years and interest 4 percent, this gives an incremental vehicle cost of €8 300 per year (roughly 7 percent). We disregard transaction and transition costs when the carriers replace their trucks, because all trucks will not be replaced at the same time allowing for a gradual transition to electric hybrids. However, one cannot rule out that such costs could reduce the speed of the transition to hybrids and therefore the optimal user charge the first years after the opening. In the third and fifth sensitivity analysis we assume intermittent electric power transmission, with gaps in the electric road. In these scenarios we assume that the hybrid trucks must be equipped with a battery with a range of 100 km. Kühnel et al. indicate that a battery of 175 kWh would allow a range of 100 km. The cost of such a battery is stated to be € 19 000 in 2030. The entire additional cost for the truck will then be € 19 000 + 50 000 = 69 000, yielding an extra yearly capital cost of €11 000. The operation costs used in the main analysis are summarized in Table 4. Sensitivity analyses Here we outline our sensitivity analyses, exploring the robustness of the social benefit of the electric road. The input data in the form of operation cost for the sensitivity analyses is summarized in Table 5. As mentioned above, in the first sensitivity analysis we increase the price of electricity by 0.0315 €/kWh. Hence, we reduce the cost difference between diesel and electric road operation. 
In the second sensitivity analysis we increase the price of diesel by € 0.210 per litre. In this case we increase the cost difference between diesel and electric road operation. Since it is mainly the relative difference in driving cost between diesel and electricity that impacts the use of the electric road, these two sensitivity analyses can be interpreted as analysing the sensitivity to any change in the variables affecting the price of driving on diesel or on electricity (taxes, grid tariffs, user charge, electricity and diesel prices, and fuel consumption). In the third sensitivity analysis, we assume intermittent electric power transmission, with gaps in the electric road infrastructure that are as long as the electrified stretches. We reduce the investment and operation cost of the electric road by half. We assume that the hybrid trucks must therefore be equipped with a battery with a range of 100 km, increasing the additional yearly capital cost of the hybrids (see section 2.2). We double the user charge assumed in the main analysis but also assume that the distance travelled on the electric road is halved. Hence, the average user charge per vehicle kilometre along the electric road routes (including the gaps) will be the same as in the main analysis. Keeping the average user charge constant in this way (and therefore the total operation cost per kilometre) might be justified if the marginal cost of the wear and tear on the remaining electric road infrastructure increases. The fourth sensitivity analysis reflects a scenario where a profit-maximizing monopolist operates the electric road. The operator will set the user charge to maximize profit according to equation (11). Using this equation in combination with the input parameters of the main analysis we find that the user charge increases by € 0.128 per km (40-tonne trucks) and € 0.178 per km (60-tonne trucks), respectively, compared to the main scenario where the user charge is based on the short-term marginal cost. Sensitivity analysis five again assumes the situation where a profit-maximizing monopolist owns and operates the road's electric infrastructure. In this analysis we assume intermittent electric power transmission, as in sensitivity analysis three, and assume that the hybrid trucks must be equipped with a battery allowing a range of 100 km, thus increasing the extra annual capital cost of the hybrid. The capital cost of hybrids and the investment and operating costs of the electric road are the same as in sensitivity analysis three. Again, we double the user charge assumed in the main analysis and assume that the distance travelled on the electric road is halved. Hence, the average user charge per vehicle kilometre along the electric road routes (including the gaps) will be the same as in the main analysis. This assumption is justified by equation (11), showing that the user charge will depend on the price of diesel and electricity, which both remain unchanged. The user charge also depends on the marginal cost of wear and tear on the electric road infrastructure, and we thus assume, as in sensitivity analysis three, that the wear and tear on the remaining electric road increases. Equation (11) also shows that the optimal user charge stays unaffected by the increase in the extra capital cost of the required hybrids with larger batteries. Investment cost There is as yet no full-scale electric road system in the world and the investment costs of such systems are therefore uncertain. 
There are two electric road demonstration projects in Sweden using different technologies for transmitting electricity to the vehicle. The Elväg Gävle project has built a two-kilometre track using overhead lines. The E-road Arlanda project demonstrates conductive power transfer from the road surface on a two-kilometre-long test track. There are no cost estimates of a full-scale expansion of the Swedish demonstration projects that we judge to be realistic. There are, however, German assessments of the investment cost of conductive transmission from above. Boston Consulting Group and Prognosis (2019) estimate the cost per kilometre of electric road in both directions at € 2.5 million. The Fraunhofer Institute et al. (2018) and Sundelin et al. (2018) both assess the cost at just over € 1.7 million per kilometre in both directions. In this project, we assume the most mature technology using electricity transmission from overhead lines. PIARC (2018) assesses the investment cost of this technology at € 2.2 million per kilometre of electric road in both directions. This figure includes € 0.4 million for increased transmission capacity from the regional grid. In addition, the Swedish Transport Administration assesses the cost at € 0.3 million per kilometre and direction for installation of railings and other required road equipment. Since we are studying investment in an existing highway, where the level of road equipment is already high, we have chosen to assume only half this cost. In total, we thus estimate the cost at € 2.5 million per kilometre of electric road in both directions. Since we assume a relatively short appraisal period for the electric road system over which the capital cost is written off, only 15 years, we disregard the annual maintenance cost of the electric infrastructure. We do, however, assume a marginal cost of one additional vehicle using the electric road when computing the optimal user charge and the operational cost of the electric road, as described in Section 3. Effects on carbon emissions The two leftmost columns of Table 6 show the share of all truck vehicle kilometres in Sweden that is fuelled by electricity transmitted from the electric road in the main scenario. The next two columns show the total vehicle kilometres where the trucks are fuelled by the electric road. The subsequent column shows the total length of the electric road network (S). The final three columns show the total vehicle kilometres (thousands) by trucks fuelled from the electric road in relation to the total length of the electric road network, V/(S·10³). In summary, the table shows that the smallest electric road network has modest effects. The effect is substantially higher for the medium-sized electric road network. In fact, the number of vehicle kilometres using the electric road per kilometre of the electric road network is twice as large for the medium-sized network compared to the smallest network. (For 40-tonne trucks the number of vehicle kilometres using the electric road per kilometre of electric road increases from 133 000 to 276 000.) Hence, it illustrates (as predicted in Section 2) the existence of economies of scope, i.e. that the number of vehicle kilometres using the electric road increases faster than linearly with the length of the electric road network (implying α > 1). However, the economies of scope only exist up to a threshold size of the network. 
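To make the scale argument concrete, the parameter α can be backed out from any two network scenarios if electric-road use is taken to scale as V ∝ S^α with network length S. The sketch below does exactly that; the numerical example uses placeholder values, since the network lengths from Table 6 are not reproduced in the text, so it only illustrates the mechanics rather than the paper's actual figures.

```python
# Illustrative sketch of the scale-elasticity argument; values are placeholders.
import math

def scale_elasticity(v_small: float, v_large: float, s_small: float, s_large: float) -> float:
    """alpha = log(V2/V1) / log(S2/S1); alpha > 1 means vehicle kilometres on the
    electric road grow faster than linearly with the length of the network."""
    return math.log(v_large / v_small) / math.log(s_large / s_small)

# Placeholder example: a network three times as long attracting six times the use
print(scale_elasticity(v_small=1.0, v_large=6.0, s_small=1.0, s_large=3.0))  # ~1.63 > 1
```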
When the size of the electric road network increases further to the largest network, the number of vehicle kilometres using the electric road increases slower than linearly with the length of the electric road network (implying α < 1). If the electric road network connects the three biggest cities in Sweden, just below one third of the total vehicle kilometres involved could be fuelled by electricity. The reduced cost of road transport implies that freight transport diverts from rail and sea to road, increasing the vehicle kilometres by truck. The electric road impacts the vehicle kilometres of trucks in Sweden as shown in the first four columns of Table 7. The effect is again modest for the smallest electric road network, and larger for the medium-sized and large networks. In the latter, the vehicle kilometres produced by 40- and 60-tonne trucks, respectively, would increase by a little less than 4 percent. However, the vehicle kilometres produced by light trucks decline such that, in total, the truck kilometres increase by only 2.5-3 percent. The final three columns of the table show how freight transport measured in tonne kilometres changes by mode: as expected, it increases for road transport and is reduced for rail and sea transport. Table 8 shows that in the main scenario, the social benefits of the electric roads are larger than their social cost in the three electric road network scenarios that we analyse. The largest benefit stems from operation cost savings for carriers, because it is cheaper to operate trucks on electricity compared to diesel. These cost savings are substantially larger than the reduction in carbon tax revenue for the government, because it is cheaper to operate the trucks on electricity than on diesel even when disregarding the carbon tax. The second largest benefit is the savings in carbon emissions. CBA The net of the (negative) marginal cost of wear and tear on the electric road system and the (positive) value of the reduced (carbon and health-hazardous) emissions is, as expected, slightly larger than zero (following from the marginal costs per kilometre given in Section 3.2). The net of the tax revenue (diesel and electricity) and user charge is mildly positive. If demand had stayed constant, the change in revenue (slightly negative) and externalities (slightly positive) should have balanced exactly, since we set the user charge to c = (t_d − t_e) − (e_d − e_e) and the externality per vehicle kilometre is lower for hybrids. However, because new road traffic is generated, the revenue is positive. The generated road traffic also increases other external (negative) effects (wear and tear of the road infrastructure, noise and accidents). The new road traffic is generated as an effect of the diversion of freight transport from sea and rail. Reduced rail traffic generates a positive socio-economic effect as the external effect of rail traffic is not fully internalized by the track charges. However, there might be negative effects in the form of increased transport cost for rail, due to scale economies when demand is shrinking. On the other hand, there might be positive effects of reduced congestion on the rail tracks. Since these two effects are not known, we have chosen to omit them. The investment cost is lower than the net benefit and the NBCR is therefore positive. The NBCR is highest for the medium-sized network, 1.02. 
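For readers unfamiliar with the NBCR aggregation in Tables 8 and 9, a simplified sketch is given below. We take the net benefit-cost ratio as the discounted net benefits over the 15-year appraisal period minus the investment, divided by the investment; the 3.5 percent discount rate is our own assumption, and the paper's additional weighting of public expenditure by the marginal cost of public funds (MCPF) is omitted here for brevity.

```python
# Simplified, assumption-laden sketch of an NBCR calculation (not the paper's exact method).

def present_value(annual_net_benefit: float, years: int = 15, rate: float = 0.035) -> float:
    """Discounted sum of a constant annual net benefit over the appraisal period."""
    return sum(annual_net_benefit / (1 + rate) ** t for t in range(1, years + 1))

def nbcr(annual_net_benefit: float, investment: float, years: int = 15, rate: float = 0.035) -> float:
    """Net benefit-cost ratio: discounted net benefits net of investment, per euro invested."""
    return (present_value(annual_net_benefit, years, rate) - investment) / investment

# Hypothetical example: 200 M€ of annual net benefits against a 1 150 M€ investment
print(round(nbcr(annual_net_benefit=200.0, investment=1150.0), 2))  # ~1.0
```

Under this reading, an NBCR of 1.02 for the medium-sized network means the discounted net benefits are roughly twice the invested amount.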
Given that the medium-sized electric road network was already built, an extension of the electric road system further (according to our third case) would have an NBCR of 0.34. Even for the smallest network the benefits would be larger than the costs. Table 9 presents the benefit-cost analyses for our five sensitivity cases. The reduction of carbon emissions and the NBCR are relatively robust in sensitivity analyses one and two, which assume fairly large variations in diesel and electricity prices. Both the medium-sized and the large electric road network have a positive NBCR even with a sharp increase in the future price of electricity (the small network does, however, then have an NBCR below zero). Sensitivity analyses Sensitivity analysis three, assuming intermittent electric transmission (gaps in the electric road network) and that the hybrid trucks are equipped with a battery with a range of 100 km, consistently yields a higher NBCR than the scenarios with electricity transmission along the entire route. In this analysis, there is also an additional social benefit not included in the calculation: larger batteries can also be used for electric driving on roads that are not electrified (which the Samgods model cannot take into account). Note that it might be more profitable for both the electric road operators and the carriers if the trucks were to be equipped with even larger batteries. It is possible that a 250 kWh battery would already allow coverage of most routes outside the motorways, depending on country and route. In that case the trucks might not need a diesel engine, reducing the cost of the trucks and the fuel. A larger battery would also allow longer gaps in the electric road along the motorway, reducing investment and maintenance costs. Sensitivity analysis four, assuming a profit-maximizing operator of the electric road, shows that for the medium-sized network, the revenue from the user charges almost covers the investment and maintenance costs. The NBCR is higher than in the main scenario, because the investment cost is lower as public funding is replaced by funding from user charges. Public funding causes deadweight losses as reflected by the MCPF. The benefit of reducing carbon emissions decreases by 20-25 percent, as it becomes more costly for carriers to use the electric road. The carriers' profits shrink substantially in this scenario, as their transport cost is reduced less due to higher user charges. Sensitivity analysis five assumes intermittent electric transmission in combination with a profit-maximizing operator. Since the investment cost is lower in this scenario, the revenue from the user charges amply covers the investment and maintenance costs of the large and medium-sized networks (the revenue from the user charge corresponds to 130 and 140 percent of investment and operating costs, respectively). The NBCR is lower than in sensitivity analysis three, since the lower investment cost due to user financing does not fully outweigh the lower reduction of carbon emissions. In the sensitivity analysis where we assumed a lower growth rate of truck traffic, based on the past trend instead of the forecast (see Table 1), the NBCR decreases, but only slightly (see Table 10). Increasing the value of carbon emissions increases the NBCR substantially. In all sensitivity analyses, the NBCR is highest for the medium-sized electric road network. Conclusions For all the electric road scenarios that we analyse, the social benefits of the electric road are larger than the social cost. 
The largest benefit stems from operation cost savings for carriers, simply because it is cheaper to fuel the trucks on electricity than on diesel. The second largest benefit is the reduction of carbon emissions. The NBCR and the reduction in carbon emissions per invested euro are highest for the medium-sized network, indicating economies of scope up to a network-size threshold. The reduction of carbon emissions and the NBCR are relatively robust in sensitivity analyses one and two, which are based on fairly large variations in diesel and electricity prices. Intermittent electric transmission increases the NBCR due to lower investment cost, though this alternative requires larger batteries and thereby increases the costs of hybrid trucks. If the user charge is set to optimize welfare, the revenues cover the marginal cost of the wear and tear on the electric road. Assuming a profit-maximizing operator of the electric road, the revenue from the user charges almost covers the investment and maintenance costs for the medium-sized network. If we assume intermittent electric transmission, the investment and maintenance costs are fully covered in all electric road scenarios. However, if user charges are set by a profit-maximizing monopolist, the reduction in carbon emissions decreases by 20-25 percent, as it becomes more costly for carriers to use the electric road. Several arguments can be made for public operation and ownership of electric roads. First, it is unlikely that private investors would be willing to take the risks of such an investment. There are at least two major investment risks. The first is that investment and maintenance costs become larger than estimated. Costs are uncertain as there is no full-scale electric road system in operation yet. The second risk is that battery development may in the longer run enable 100 percent battery operation. The use of fuel cells is another competing technology, currently too expensive but perhaps gaining ground in the long term. However, the solution of applying intermittent electric transmission, requiring trucks equipped with larger batteries (some of which might not even need a diesel engine), would allow some hedging of this risk, as it reduces the investment cost while still accomplishing the same fuel cost and carbon emission reductions. Second, the large economies of scale and scope (i.e. that the electric road network must be extensive for the potential benefits to be fully realized) make the investment risky and probably too extensive for a private investor. This is demonstrated by the result that the smallest electric road network is least profitable and cannot be fully financed by user charges. Third, a private operator would eventually need to be regulated. As long as diesel trucks remain, the user charges set by the monopolist electric road operator are restricted by the competition from them. However, if the diesel trucks are completely outcompeted, the pricing of the electric road needs to be publicly regulated. Fourth, private ownership of the electric infrastructure on a publicly owned road implies divided ownership and responsibility for maintenance. This can lead to losses in efficiency and raise liability issues and associated risk-management concerns. In summary, electric roads appear to provide a cost-effective means to significantly reduce carbon emissions from heavy trucks. 
In the scenario where the expansion connects the three biggest cities in Sweden, emissions will be reduced by approximately 1.2 million tonnes in 2030, which corresponds to approximately one third of the emissions from all heavy trucks in the country. The main argument against a commitment to electric roads is that investment and maintenance costs are uncertain, and that in the long run battery development or hydrogen fuel cells may reduce the benefit of electric infrastructure. We have tried to take the latter risk into account by assuming a calculation and depreciation period of only 15 years, until 2040, but it is nonetheless a risk. It remains an open question whether this result can be transferred to other countries. On the one hand, Sweden has low electricity prices, increasing the benefits of electric roads. On the other hand, Sweden also has long distances relative to its small population, reducing the benefits. Finally, the large economies of scope indicate the benefit of a coordinated expansion of electric roads in Europe.
9,777.6
2021-04-01T00:00:00.000
[ "Economics" ]
Is Automatic Facial Expression Recognition of Emotions Coming to a Dead End? The Rise of The New Kids on the Block The traditional assumption in this field is that emotions that are felt inside the body are displayed externally via the face, and that these in turn can be universally mapped into the six categories of happiness, sadness, surprise, fear, anger and disgust. In reality, though, felt emotions are not always so visibly manifest because the experience is subjective, nor do they map cleanly to Ekman's six categories. Another limitation of this approach is that expressive facial signals are highly context dependent and will communicate different things in different contexts - emotions, cognitive load, back-channelling, turn-taking, etc. In the early 1990s, a number of facial expression recognition researchers had a motivation of revolutionising the way we interact with technology [1] by enabling it to become more human-like. By being able to analyse human emotions through the displayed facial expressions and responding to these in an appropriate and meaningful way, machines would become more intuitive and emotionally and socially intelligent. This paved the way for novel computer vision techniques for analysing people's facial expressions. It has been over 15 years since [1] was published in the IEEE Transactions on Pattern Analysis and Machine Intelligence. Since then, AFER, and in particular recognising the six categories of emotions, has received a lot of attention in both the computer vision research community and the press. In addition to the computer vision researchers, by now AFER has received considerable attention from machine learning researchers, which is understandable. For many problems where the (input) sensing conditions and output labels are more or less standardised - nearly frontal faces, constant/acceptable illumination conditions and the six emotion categories in this case - researchers without expertise in the relationship between felt emotion and displayed expression can come in and apply their different techniques to solve this input-output problem on publicly available datasets. This trend has caused many people outside the AFER research field, and in particular the media, to believe that facial expression recognition is a solved problem. However, AFER researchers have frequently reported that while expression classification works reasonably well for posed expressions, such as posed smiles, its performance drops quite dramatically on spontaneous expressions elicited during natural conversations and day-to-day interactions [2], [3]. One of the biggest issues is working out how to obtain ground truth labels for spontaneous expressions and modelling the fact that individuals have subjective and idiosyncratic ways and scales of expressing emotions. As announced recently by the Wall Street Journal, Apple has just bought Emotient [4], 'a startup company that utilises artificial intelligence to analyze facial expressions and read emotions'. With the acquisition of Emotient by Apple, we can confidently state that the biggest success of the AFER research field has been the spin-out companies such as Affectiva and Emotient in the USA, and CrowdEmotion in the UK. 
These companies mainly deliver market-research-related output, i.e. analysing how much viewers smile while watching an advert or a movie clip. Another 'lighter' application has been the smile detector embedded in digital cameras, and mobile apps that enable someone's facial expression to be modified and morphed, possibly for sharing with their social network for fun and entertainment. Coming to a Dead End? On the one hand, it has been great to see the growth in research in this domain - recognising Ekman's six categories in clean conditions is now a solved problem. On the other hand, we can ask whether this has led to sufficient growth in the AFER area, as most of the new researchers coming in from outside assume that the inputs and outputs are already well-understood phenomena. As mentioned earlier, we know that the six categories of emotions are of no use for the majority of everyday applications. This simplification of the task, while serving us well in the early days, needs to change significantly. This forces us to move into uncomfortable territory where we have to ask ourselves the more fundamental questions, such as: what is the contemporary definition of emotion in this technologically driven, fast-changing world that is very different from Darwin's? How are these emotions represented in facial expressions? How do we do the labelling (in time and also in type - frames, intervals, FACS, dimensional, etc.)? Recently a number of researchers have been arguing that the continuous and dimensional approaches match better with reality these days, but how many people are working on that compared to using simplistic data sets acquired under simplified and controlled conditions (e.g., the Cohn-Kanade or MMI Database)? Emotient's acquisition by Apple, coupled with the statement made by Andrew Moore, the dean of computer science at Carnegie Mellon, that 2016 is the year when machines learn to grasp human emotions, should in theory excite all of us researchers who have been working in this challenging field for some time. However, as insiders we are rather apprehensive about this news. Moore's statement regarding the spreading trend across the industry in emotion recognition technology is indeed correct. However, his statements about computers doing a better job than humans at assessing emotional states and humanity getting to a stage where we will be having more meaningful dialogue with computers are debatable. Moore is right, however, in pointing out that emotion recognition technology can be used for many everyday applications, including mental health, security, determining patient pain, and tracking how shoppers react to products in stores. Despite the dream described by Moore, the current state of the AFER domain seems to indicate that AFER researchers no longer know what their work is really about. The most prominent researchers in the field appear to be constantly proposing more elaborate and complex machine learning or computer vision approaches, aiming to publish at conferences such as the International Conference on Computer Vision (ICCV) or the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), losing track of what they are really trying to do. What are AFER researchers really trying to achieve? What is the real research problem in AFER? What is the dream that was/is being sold? A New Age of Expression Recognition: The New Kids on the Block? 
While attempting to answer the above-mentioned questions, we need to keep in mind that since the publication of the PAMI surveys in 2000 [1] and 2009 [2], our understanding of how humans and technology interact has changed considerably as social media and mobile phones have become the predominant ways in which we interact with technology. With the huge increase in mobile phone usage, we interact with technology mostly in dynamic and noisy environments, often while on the move. This shift from the personal computer to the portable computer has led to a change in the human-computer interaction paradigm. This shift forces us to face the challenging question of whether the visual understanding of human emotions and social behaviour is still the primary modality of interest for researchers in this domain. We already know that not all aspects of emotions can be measured using the same sensors; for instance, the arousal dimension is known to be better communicated with non-visual signals such as voice or with physiological signals [2]. So, are we, as a research community, moving with the shift in people's relationships with technology? Or have we become stuck in solving problems for technologies of a bygone age? Let us look at a prominent application domain that keeps on receiving an ever-growing amount of research funding - health care. With a growing and aging population, there is an increasing demand, as well as political and social pressure, to revolutionise health care around the world, particularly in wealthier countries such as the USA, the UK and Japan. What has automatic facial expression recognition technology delivered in the health care and autism domains to date? Is it convincing to say that the promise has already been delivered by other modalities (i.e., the new social signals) that we refer to as the new kids on the block? Simple bio-signals such as electrodermal activity (EDA) have been covering much more ground and delivering practical, realistic and life-changing solutions (such as early seizure prediction and warning). These, coupled with the myriad sensing capabilities that mobile technology provides readily in our pockets (location sensing, accelerometers, heart rate monitoring), have revolutionised the way intuitive and ecologically valid sensing can be done and integrated into daily life without the need for the analysis of the face and facial expressions. Moving from Vision-Only to Multimodal Emotion Sensing As we already know, different emotions can be better expressed by one modality rather than another. The most incremental transition from vision-only AFER systems is to include the audio modality. This is particularly needed to correctly analyse and differentiate facial deformations caused by expressions from those caused by speech. Some researchers have started to work more in this area but the community effort is still small. In the day-to-day use of technology, the usage of alternative sensing modalities such as touch, RGB-D, bio-signals, and other wearables has been taking over. However, in the AFER research world we still see the dominance of the vision modality, which is clearly the default choice for people who have dedicated their years and careers to working in this domain. We then need to ask ourselves: are data sets for these new modalities not available for low-entry-level research? Is the whole community shooting itself in the foot by not enabling more low-entry-level research in the areas where progress is really needed? 
Are we ready to accept that other sensing modalities (e.g. audio, keyboard usage, phone call use, heart rate, GSR, acceleration, etc.) are in fact acting as a game changer? Educating the Next Generation As mentioned already, the old style of AFER using Ekman's six categories is totally outdated. However, investigating what the underlying problem is would require the collection of new data and a new way of thinking about how to label the data. To do that, we need to be training more people who understand the relationship between facial expression and emotions, and affective computing. If we look at the number of affective computing or social signal processing courses in the world, we can probably already see the issue. Machine learning and computer vision courses are likely to significantly outnumber them. But how can we possibly train people to solve problems when they do not understand what the problem is? What is more, this moves far away from the safety of making simplified assumptions that can be nicely formulated into an easy optimisation problem. Once the notion of ground truth starts changing, who is qualified to help question that and develop and refine that notion so that we can go beyond those killer six categories or the two dimensions of arousal and valence? Moving Deeper into the Wild The current picture shows that the majority of AFER researchers are actually doing computer vision, with the aim of meeting the expectations of yesterday. We can no longer define the goal as facial expression recognition for personalised computing, because computing itself has been transformed. Instead of having a machine that is portable and understands us intimately, i.e., what we are feeling right now, the current problem is understanding the true emotions in the wild, in real-life contexts. The prevalence of mobile and wearable technology shows that predicting or perceiving our needs is the way to go, i.e. personal butler/assistant applications such as Google Now - a digital companion that knows all about you, does not share that information with others, and can help facilitate all the needs the user has in life, from socio-emotional needs to career ambitions to health. To get to that stage, the idea that a video camera will be pointed towards our face anytime and anywhere is unrealistic. Therefore, the biggest question we need to ask ourselves is whether the visual understanding of human emotions and social behaviour is still the primary modality of interest for researchers in this domain. The really fascinating new problems arise when we try to estimate the sentiment of experience in the multisensorial real world of today. Fifteen years ago, smartphones did not exist. Now they have revolutionised not just how we live but also how we think. The challenge is addressing how we can link the spontaneous behaviour that we exhibit as we navigate through our everyday lives to real emotions and feelings. How do we label these? Can we rely on clean labels? Probably not. We will end up with a multitude of noisy labels that could be associated with all sorts of activities, embedded in a whole load of short-term and long-term contexts. This is an extremely challenging problem but one that is, interestingly, fundamental to computer science and yet not sufficiently tackled. Perhaps because of that, we are all looking forward to seeing what Apple will do with the emotion recognition technology of Emotient. 
Will Apple be able to use its renowned creativity to find the killer app that the AFER field has been waiting for? Or is this yet another hype that will soon pass and leave us AFER researchers to face the questions of tomorrow? We shall wait and see. Acknowledgement Hatice Gunes' work is partially supported by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1). Hayley Hung was partially supported by the Dutch national program COMMIT, by the European Commission under contract number FP7-ICT-600877 (SPENCER), and is affiliated with the Delft Data Science consortium.
3,231.2
2016-11-01T00:00:00.000
[ "Computer Science" ]
GOMOS ozone profile validation using ground-based and balloon sonde measurements The validation of ozone profiles retrieved by satellite instruments through comparison with data from ground-based instruments is important to monitor the evolution of the satellite instrument, to assist algorithm development and to allow multi-mission trend analyses. In this study we compare ozone profiles derived from GOMOS night-time observations with measurements from lidar, microwave radiometer and balloon sonde. Collocated pairs are analysed for dependence on several geophysical and instrument observational parameters. Validation results are presented for the operational ESA level 2 data (GOMOS version 5.00) obtained during nearly seven years of observations, and a comparison using a smaller dataset from the previous processor (version 4.02) is also included. The profiles obtained from dark limb measurements (solar zenith angle > 107°), when the provided processing flag is properly considered, match the ground-based measurements within ±2 percent over the altitude range 20 to 40 km. Outside this range, the pairs start to deviate more and there is a latitudinal dependence: in the polar region, where there is a higher amount of straylight contamination, differences start to occur lower in the mesosphere than in the tropics, whereas for the lower part of the stratosphere the opposite happens: the profiles in the tropics reach less far down as the signal reduces faster because of the higher altitude at which the maximum ozone concentration is found compared to the mid and polar latitudes. Also, the bias shifts from mostly negative in the polar region to more positive in the tropics. Profiles measured under "twilight" conditions often match the ground-based measurements very well, but care has to be taken in all cases when dealing with "straylight" contaminated profiles. For the selection criteria applied here (data within 800 km, 3 degrees in equivalent latitude, 20 h (5 h above 50 km) and a relative ozone error in the GOMOS data of 20% or less), no dependence was found on stellar magnitude, star temperature, nor the azimuth angle of the line of sight. No evidence of a temporal trend was seen either in the bias or frequency of outliers, but a comparison applying less strict data selection criteria might show differently. Published by Copernicus Publications on behalf of the European Geosciences Union. 
Background Ultraviolet light (UV) present in solar radiation can potentially threaten life on Earth as UV radiation can cause alterations in DNA (Luchnik, 1975). Ozone in the Earth's atmosphere absorbs 97 to 99% of the UV, significantly reducing the harmful effects. Most of these absorption reactions take place in the so-called ozone layer, which is concentrated at altitudes between 15 and 35 km. A reduction of the ozone concentration and the associated increase of ultraviolet radiation is expected to result in a change of plant species composition and a possible reduction of agroecosystem production (Milchunas et al., 2004; Ballare et al., 1996; Koti et al., 2005). Another example from animal experiments suggests that through the increase of UV-B radiation (between 280 and 315 nm), for each 1% loss of ozone the incidence of eye cataracts would rise by 0.5% (van der Leun and de Gruijl, 1993), and from epidemiological data an increased incidence of non-melanoma skin cancer by 2% can be expected per percent ozone decrease (Urbach, 1997). The catalytic destruction of ozone by chlorofluoromethanes was first described by Molina and Rowland (1974). In order to protect life on Earth from UV, the so-called Montreal Protocol was designed to protect the ozone layer from destruction by ozone depleting substances. Although production of these substances has been significantly reduced, due to their long lifetime ozone destruction will still continue for several decades, as can be seen from the appearance of the record-size ozone hole above Antarctica in 2006 (ESA, 2006). The European Space Agency launched the ENVISAT satellite dedicated to environmental research in March 2002. ENVISAT carries three instruments dedicated to atmospheric studies: SCIAMACHY, MIPAS and GOMOS (see http://envisat.esa.int/instruments/). The main objective of the last instrument is to monitor ozone and its trends in the stratosphere. GOMOS stands for Global Ozone Monitoring by Occultation of Stars and, as its name states, this instrument uses stellar occultation to retrieve information on ozone and other trace gases from spectra in the ultraviolet, visible and near-infrared wavelengths. GOMOS is self-calibrating and, due to its star tracking capabilities, it has a very accurate altitude determination. 
Approval has recently been given for the continuation of the ENVISAT mission beyond 2010 (its originally planned end-of-mission year). The current end of the mission is expected no later than August 2014, but the exact date depends on the available amount of fuel (EO-PE (PLSO and MAO teams), 2007). In order for the mission to continue, some orbital changes will take place in October 2010. These changes will reduce the altitude of the platform and reduce the repeat cycle from 35 to 30 days, but no major problems are foreseen for GOMOS acquisitions. However, comparison with long-term validation records is required to monitor the effects of these changes as well as the platform/instrument's ageing and to assess improvements in the GOMOS processing algorithms. In this respect, validation activities are essential to guarantee the stability of the quality of GOMOS and other remote sensor products (Dupuy et al., 2009; Brinksma et al., 2006). Previous validation activities The quality assessment of ozone profiles retrieved from satellite data can be carried out in three different ways: 1) using model studies/climatology; 2) using already validated alternative satellite products; or 3) using profiles collected with ground-based/airborne instruments. Bertaux et al. (2004) compared GOMOS ozone profiles of 4 days in 2002 with the Fortuin-Kelder ozone climatology and found an excellent agreement. Differences found were attributed to natural variation and the inclusion of daytime data in the climatology, whereas only night-time GOMOS measurements were taken for the comparison. They also compared two GOMOS measurements at the same location, but from two consecutive orbits and using distinct stars. The observed internal consistency was again referred to as "excellent". Kyrölä et al. (2006) built a climatology from the GOMOS measurements (prototype processor version 6.0a) consisting of monthly latitudinal distributions of the ozone number density and mixing ratio profiles. The generated stratospheric profiles were compared with the Fortuin-Kelder daytime ozone climatology. Large differences were observed in the polar region, which were found to be correlated to large increases of NO2. Around the equator GOMOS reported significantly less ozone than the Fortuin-Kelder climatology, but it was mentioned that the Fortuin-Kelder climatology was less reliable in this region due to the low number of data points used. In the upper stratosphere, ozone values from GOMOS were systematically larger than in the Fortuin-Kelder climatology, which was again attributed to the diurnal variation. In the middle and lower stratosphere, GOMOS reported a few percent less ozone than Fortuin-Kelder. Verronen et al. (2005) compared night-time GOMOS ozone profiles with MIPAS measurements for individual cases as well as profile means for a limited number of profiles (1 day in 2002 and 1 day in 2003). Although MIPAS uses a different measurement technique from GOMOS (MIPAS is a mid-infrared limb sounder), good agreement - within 10-15% - was found for the stratosphere and lower mesosphere. Nevertheless, MIPAS persistently gives a higher estimate in this altitude region. Note that also two processor versions for GOMOS had been used for the different days. Comparing GOMOS version 5.00 ozone profiles to ACE-FTS (version 2.2 ozone update product), median differences between the collocated profiles were within 10% for the altitude range 15 to 40 km (Dupuy et al., 2009). In Meijer et al. 
(2004), a comparison of approximately 2500 GOMOS version 4.02 ozone profiles with data from lidar, balloon sonde and microwave radiometer measurements was presented. The authors illustrated that the quality of the GOMOS profiles strongly depended on the limb illumination conditions. For dark limb measurements, the GOMOS profiles agree well (bias < 7.5%) with the collocated data over the altitude range 14 to 64 km. No dependence on star temperature and magnitude or latitude was found, although the observed bias between 35 and 45 km was somewhat larger in the polar regions. The ozone profiles delivered by GOMOS were compared with balloon sonde measurements acquired in 2003 at two locations by Tamminen et al. (2006). Their results indicated that the overall agreement between collocated measurements was good and that small-scale structures could be detected with GOMOS' vertical resolution. Explanations for the differences between the two locations were sought in star brightness and the strength of the polar vortex. Renard et al. (2008) found an excellent agreement between GOMOS ozone profiles and balloon-borne vertical columns in the middle stratosphere, with an accuracy of 10% for individual profiles. For the tropical zone, several ground-based and satellite measurements including GOMOS have been compared with data from a balloon-based sensor (SAOZ UV-Vis spectrometer using solar occultation) circling the globe in three missions (Borchi and Pommereau, 2007). GOMOS prototype processor version 6.0b performed very well above 22 km (bias of 1-2.5%), but degraded strongly below this altitude. Even though the altitude registration of GOMOS was considered very precise, SAGE II and SAOZ were found to be more precise (in terms of ozone): ∼2% compared to ∼6% for GOMOS above 22 km. Note, however, that the latitudinal coverage was very limited, as was the number of data samples. Furthermore, it is suggested that remote sensing measurements have a systematic high bias in oceanic convective cloud areas. Also in this region, Mze et al. (2010) compared GOMOS version 5.00 ozone profiles to balloon sondes from eight stations in the SHADOZ network. They found a satisfactory agreement between 21 and 30 km, although site-dependent differences were observed. At lower altitudes, the GOMOS ozone profiles exhibited a large positive bias compared to the balloon sondes. Outline This article can be seen as a continuation of the work presented in Meijer et al. (2004), as the available GOMOS dataset is extended to seven years and a new processor version is available. The following sections describe the input data used and the methodology. Section three presents the validation results: in Sect. 3.1 the comparison between the previous processor (version 4.02) and the current operational processor (version 5.00) for an overlapping dataset, and in Sect. 3.2 the validation results of version 5.00 for the seven-year-spanning dataset. The conclusions can be found in Sect. 4. GOMOS ozone profiles The GOMOS data used in this study include the operational level 2 data from version 5.00 spanning the period August 2002 to August 2009. We also obtained a dataset processed with the previous algorithm version 4.02 for comparison purposes. This second set contains data from the period June 2004 to January 2005, complemented with a few measurements in August 2005. Note that as the version 4.02 data do not cover the same time period as used in Meijer et al. 
(2004), the results presented here are not directly comparable. We do not intend to reproduce their results; we merely aim to point out differences between version 4.02 and 5.00 relative to the ground-/balloon-based measurements. Section 2.1.1 describes the implemented changes from the old (4.02) to the current (5.00) processor. All data were restricted to an estimated error in the ozone concentration of 20% or less. The product confidence data (PCD) flags in the GOMOS products indicate the validity of the retrieval of the local density profiles. In addition, the GOMOS ozone profiles receive a quality flag based on the illumination conditions of the atmospheric limb. Five illumination conditions have been characterised: bright (solar zenith angle at the tangent point smaller than 97°), twilight, straylight, twilight + straylight, and dark. Because of the orbit chosen for ENVISAT, no full-dark measurements can be taken over the Arctic region. Nevertheless, similar to Meijer et al. (2004), an alternative filtering using a solar zenith angle larger than 107° (astronomical twilight zenith angle) for the tangent points will be used here as well in order to get a picture for this region. Besides the illumination condition, the data quality is also influenced by the characteristics of the observed star. Weak or dim stars have a lower signal-to-noise ratio and therefore a noisier transmission spectrum than strong/bright stars. Furthermore, the star temperature determines the maximum intensity: for hot stars this is in the UV region, whereas for cold stars the maximum intensity is in the visual wavelengths and the transmission in the UV part is very noisy. As the UV wavelengths are used for the retrieval above 40 km, the usability of weak stars is strongly reduced there (Tamminen et al., 2010; European Space Agency, 2007). The product quality disclaimer contains additional recommendations from the GOMOS quality working group for data selection (European Space Agency, 2006). Changes from version 4.02 to 5.00 In addition to various corrections applied to the level 1 product, several level 2 processor changes have been implemented in IPF 5.00. The atmospheric density profile is no longer retrieved in version 5.00; instead, a reference atmospheric density profile is derived from ECMWF data below 1 hPa and the MSIS90 model above. This profile is then subsequently used in the version 5.00 retrieval. This was implemented because in version 4.02 a strong deviation from ECMWF data below 25 km and above 40 km was observed. The retrievals should especially improve at low altitudes where ECMWF data are accurate. Additional errors are reported for ozone, NO3 and aerosols. A quadratic aerosol law (αλ² + βλ + γ, where λ is the wavelength and α, β and γ are altitude dependent and derived from the GOMOS measurements) has been incorporated to describe the wavelength dependence of the aerosol extinction, whereas in version 4.02 an inverse wavelength dependence (1/λ) was assumed. A number of different aerosol models have been studied, where the quadratic model showed the best performance in comparison to other satellite and ground-based measurements (GOMOS quality working group meeting 15, 2007), and it allows a more realistic description of the aerosol effective cross section than the 1/λ law. A different cross section was introduced for the retrieval of ozone in IPF 5.00: the cross section of Bogumil et al. (2000) is used for both the UV and the visible wavelengths. More details on the GOMOS processing and introduced changes are given by Bertaux et al. (2010) and in the GOMOS handbook (European Space Agency, 2007). 
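To make the change in the aerosol model concrete, the toy sketch below contrasts the two spectral dependencies of aerosol extinction mentioned above. The coefficient values are arbitrary placeholders; in the actual retrieval α, β and γ are altitude dependent and estimated from the GOMOS spectra themselves, which this sketch does not attempt to reproduce.

```python
# Toy illustration of the assumed spectral dependence of aerosol extinction
# in the two processor versions; coefficients here are arbitrary placeholders.

def extinction_v402(wavelength_nm: float, scale: float = 1.0) -> float:
    """Version 4.02: inverse-wavelength (1/lambda) aerosol law."""
    return scale / wavelength_nm

def extinction_v500(wavelength_nm: float, alpha: float, beta: float, gamma: float) -> float:
    """Version 5.00: quadratic law alpha*lambda**2 + beta*lambda + gamma."""
    return alpha * wavelength_nm ** 2 + beta * wavelength_nm + gamma
```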
Ground-based measurements The importance of ground-based measurements is slowly getting recognised by initiatives like GAW (Global Atmospheric Watch), Geomon (Global Earth Observation and Monitoring of the atmosphere) and GMES (Global Monitoring for Environment and Security). Despite the fact that these measurements are essential for a global understanding of our climate, securing long-term funding to warrant their continuation is usually rather difficult (Nisbet, 2007). Although satellite observations can complete the picture through the spatial coverage of their measurements, we must ensure a careful validation of the derived information. It is important to realise that satellite-based instruments are complementary to the ground-based observations, as, for instance, the temporal and vertical resolutions of the latter are often higher and the errors of the products better characterised. Furthermore, the long-term background measurements by ground-based observations are required to overcome data gaps in between satellite missions and to quantify the introduced differences between sequential satellite-based instruments (McDermid et al., 1990; Jégou et al., 2008; Clerbaux et al., 2008). Here we combine sonde, lidar and microwave radiometer data for the validation, using the altitude ranges where each instrument has the largest added value and best performance. Stratospheric ozone lidar data In this study we make use of ozone profiles derived from differentially absorbed lidar signals emitted and recorded by stratospheric ozone lidar systems. Two light pulses are simultaneously emitted at different wavelengths with different ozone absorption cross sections. The difference in the returned backscatter can be related directly to the ozone concentration, which is derived as a function of altitude based on the elapsed time since the pulse emission. The lidars mostly operate under night-time and clear-sky conditions. All of the eleven participating lidars are part of the Network for the Detection of Atmospheric Composition Change (NDACC). The lidar working group of NDACC has developed various protocols to ensure consistency between the different lidars, and high data quality is established through intercomparison and validation exercises with models and other instruments (McDermid et al., 1998; NDACC lidar working group, 2009). The lidar network can be considered homogeneous within about 2 percent and, on average, the precision of the ozone measurements is around 1% up to 30 km, 2 to 5% at 40 km and 10% at 45 km (Keckhut et al., 2004; Steinbrecht et al., 2009). On average, resolutions range between 1 and 2 km at low altitudes (below 20 km), increasing to 3-5 km at 40 km (Godin et al., 1999). Balloon-borne ozone sonde data Ozone sondes consist of an inert pump, an electrochemical cell facilitating a reaction between ozone and iodide, a detector for the small electric current generated by this reaction, and an interface to a radiosonde which additionally measures air temperature and pressure (Deshler et al., 2008). Data are provided as partial ozone pressure, which has to be converted to number density using the air temperature and pressure that were measured simultaneously with ozone by the sonde. Ozone sondes have a precision of about 5% (Smit and Kley, 1998; Thompson et al., 2003a; Deshler et al., 2008). 
In this study, balloon soundings have been used from the Ground-Based Measurement and Campaign Database (GBMCD) subgroup of the Atmospheric Chemistry and Validation Team (ACVT), with the addition of Southern Hemisphere Additional Ozonesondes (SHADOZ; see Thompson et al. (2003a, b, 2007) for a description of this initiative) to increase the coverage in the tropics. Data from the SHADOZ sondes are re-binned to longer time intervals using a block average for a given time window size (e.g. 10 s). In order to deal with the non-linear behaviour of pressure with increasing altitude, the logarithm of the reported pressures was taken before averaging, followed by taking the inverse logarithm of this average to normalise. All sonde data have been cut off at an altitude of 30 km and averaged over two kilometres (corresponding to the GOMOS resolution below 35 km) to avoid the introduction of local biases caused by the presence of small-scale structures seen by the sonde, which would mainly enlarge the standard deviation of the differences. This 2-km averaging was done using a running mean. Microwave radiometer data As a third validation instrument we have used data from microwave radiometers. These instruments are often operated continuously during both day and night time. Although they have a broad vertical resolution, the data are useful to study the stratosphere and especially the mesosphere, where lidar data are no longer available. The vertical resolution (defined as the full width at half maximum of the averaging kernels) is in the range 6 to 10 km between 20 and 50 km and about 13 km at 64 km (Boyd et al., 2007; Hocke et al., 2007). Precision is typically about 5% between 20 and 55 km and increases above (7% at 64 km). Compared to the ozone profiles provided by the AURA microwave limb sounder (version 2.2), agreement with two NDACC microwave radiometers was within 5% (Boyd et al., 2007). Data are restricted in this study to altitudes ranging between 30 and 70 km, with the condition that the reported error cannot exceed 30%. Equivalent latitude data Potential vorticity (PV) data on the 475 K potential temperature field were obtained from the ECMWF interim reanalysis (ERA-Interim) data archive. Since it has been noted that the position of the vortex boundary derived from potential vorticity data may differ from that seen in observations (Greenblatt et al., 2002; Müller and Günther, 2003), which has been attributed to the availability of input data for the calculation of PV, it was decided not to interpolate the PV spatially and temporally nor to derive the vortex position. Instead, equivalent latitudes were derived for all GOMOS 5.00 data as well as for the ground-based measurements, and the data were linked to the nearest grid cell (cell size of 1.5°) and closest time (PV data are computed for 8 h intervals). Subsequently, the relative equivalent latitude difference between the GOMOS and ground-based measurements was used to study the effect on the validation results. Collocations and data treatment Following Meijer et al. 
(2004), we have restricted all collocations to a maximum horizontal distance of 800 km and a maximum time difference of 20 h between measurements. For the full dataset comparison in Sect. 3.2, we also enforce a maximum difference in equivalent latitude of 3 degrees to avoid problems in the polar region related to observing different air masses. Above altitudes of 50 km, the maximum time difference is set to 5 h and the daylight conditions have to be the same, as mesospheric ozone is subject to diurnal variation. Both the validation and GOMOS datasets have been interpolated using a nearly linear spline to a common (200 m) altitude grid. As described before, the sonde data are averaged to the GOMOS resolution using a running mean. Differences in vertical resolution are not taken into account for the lidar data, because the effect is considered relatively small given the similar resolution of GOMOS. If we were to apply the averaging kernels and consider the a priori information from the microwave radiometer data, the GOMOS data would be degraded and no longer independent from the microwave radiometer data (Meijer et al., 2003). The effect of not taking this resolution difference into account should lead to an increased standard deviation of the differences between GOMOS and the microwave retrievals. Substantial differences would be expected in altitude regions where there are small-scale features, which is less likely above 30 km. Here we have smoothed the GOMOS data that collocate with microwave radiometer measurements using a running mean of 10 km, as an average microwave radiometer resolution at 50 km (the middle of the range used for the validation). Note, however, that no large effects were observed when completely disregarding the differences in resolution. Fig. 1. GOMOS 4.02 (top) and 5.00 (bottom) versus validation data. Left panels show the ozone number density as a function of altitude, with the GOMOS profiles in red and the validation data in blue (mean and standard deviation in thick and thin lines respectively). The ozone concentration is plotted on a log-scale for the upper 30 km. The middle panels show the difference between GOMOS and the validation data (with respect to the validation data) in percentage as a function of altitude. The green line shows the median difference profile, the black lines the mean (thick black line) plus/minus 1 standard deviation (thin lines) and the grey lines show the mean plus/minus 2 standard errors. On the right side of the middle panel the number of collocated pairs is shown, with the total number of used pairs at the bottom of the plot. The right panel shows the median difference (thick black line) together with the 16 and 84 percentiles (dark grey lines) and the 2.5 and 97.5 percentiles (light grey lines). 
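The pair-selection criteria and the difference statistics described above are straightforward to express in code. The sketch below is a hedged illustration, not the processing chain actually used: the function and field names are ours, and it only covers the screening thresholds and the relative-difference quantiles quoted in this section.

```python
# Hedged sketch of the collocation screening and difference statistics described
# in this section; names are illustrative, not those of the actual data files.
import numpy as np

MAX_DISTANCE_KM = 800.0
MAX_EQLAT_DIFF_DEG = 3.0
MAX_TIME_DIFF_H = 20.0
MAX_TIME_DIFF_H_ABOVE_50KM = 5.0

def is_collocated(distance_km: float, eqlat_diff_deg: float,
                  time_diff_h: float, above_50_km: bool = False) -> bool:
    """Apply the pair-selection criteria used for the version 5.00 comparison."""
    max_dt = MAX_TIME_DIFF_H_ABOVE_50KM if above_50_km else MAX_TIME_DIFF_H
    return (distance_km <= MAX_DISTANCE_KM
            and eqlat_diff_deg <= MAX_EQLAT_DIFF_DEG
            and time_diff_h <= max_dt)

def difference_statistics(gomos, valid):
    """Relative differences (GOMOS - VALID) / VALID * 100 and the quantiles
    plotted in the comparison figures (2.5, 16, 50, 84 and 97.5 percent)."""
    gomos, valid = np.asarray(gomos, float), np.asarray(valid, float)
    diff = (gomos - valid) / valid * 100.0
    quantiles = np.percentile(diff, [2.5, 16, 50, 84, 97.5])
    return diff.mean(), diff.std(ddof=1), quantiles
```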
A complete validation should also consider the provided estimates of error in the ozone retrievals. In this study we have only used the provided errors in the validation and GOMOS data in the data selection process, as for GOMOS the estimated error is a subject of discussion in the quality working group (e.g. the scintillation correction is still an issue; Sofieva et al., 2009) and errors in the validation data are often not reported (sonde) or non-homogeneous (e.g. different definitions used in the lidar community). As a consequence, a full study could be dedicated to the comparison of errors and their uncertainties. We believe that the large number of collocations used in the analyses deals with these complications in a different way, as the error in the data should correspond to the spread seen across a large population of comparisons. The improved error estimates in the next GOMOS processor version (IPF 6) are described in Tamminen et al. (2010) and suggestions for further improvements are given in Sofieva et al. (2010).
Comparison between versions 4.02 and 5.00
Figure 1 shows the comparison between GOMOS versions 4.02 (top) and 5.00 (bottom) with the validation data (VALID). The left panels of both plots show the mean ozone profiles (thick lines) as a function of altitude together with the corresponding standard deviations (thin lines) for GOMOS (in red) and the validation data (in blue). The ozone data are shown on a log-scale from 50 km upward to enhance visibility. The middle plots show the difference between GOMOS and the validation, where the difference is calculated as (GOMOS − VALID)/VALID × 100. The green line shows the median difference, the thick black line corresponds to the mean difference, the thin black lines illustrate the mean ±1 standard deviation and the thin grey lines show the mean ±2 standard errors. The number of used collocated pairs for a given altitude is shown on the right side of the middle panel, whereas the total number of collocated pairs is shown at the bottom of the panel. The right panel shows the following quantiles of the differences (lines from left to right): 2.5%, 16%, 50% (median), 84% and 97.5%.
The differences between the two analyses in total pairs and in collocated pairs at some altitudes originate from the difference in the errors assigned to the datasets. In general, more data points in version 5.00 fulfil the criterion of a maximum error of 20%. Few outstanding differences between the two versions can be observed in the median profiles. The small negative bias from 20 to 50 km has shifted positively. With both versions, the standard deviation increases substantially below 30 km due to the presence of some outlier profiles. A large part of the deviation between the mean and median differences between 24 and 30 km can be attributed to comparisons with Dumont d'Urville (66.7° S), Thule (76.5° N) and Legionowo (52.4° N) soundings. A closer investigation at the latter two sites pointed out that some of these observations include straylight contamination. At Dumont d'Urville, however, the illumination condition is not the only factor involved, as fully dark observations still produce outlier ozone concentrations compared to the soundings. This can be attributed to the increasing spatial variability in this area as time progresses, given the fact that the June and July comparisons show good results. As ozone depletion can start already in mid-winter at the latitude of Dumont d'Urville (Roscoe et al., 1997), differences with measurements at other latitudes are likely to be found, which is what we observe in this case, with the relatively large distance between the (fully dark) satellite and sonde measurements. In addition, small-scale structures are difficult to follow with GOMOS' resolution. As spring advances, so does the ozone hole
formation whereas the illumination conditions for GOMOS observations get worse.As a result, most collocations are with lower latitude measurements, which have a very different ozone distribution in this period.One future solution would be to optimise the collocation criteria and make them dependent on latitude and/or time of the year. Figure 1 (see the GOMOS standard deviation in left panel) also shows that a few additional outlier profiles are produced with version 5.00 around the ozone maximum.These can be filtered out by removing unrealistic profiles exceeding a concentration of 10 13 molecules per cm 3 .Note that differences with the comparison carried out by Meijer et al. (2004) at the higher part of the profile (above 45 km where only microwave data are available for comparison) in version 4.02 are caused by a difference in the time span of the datasets of Meijer et al. and our datasets: the current analysis only covers data from 2004 and 2005, resulting in fewer collocations with microwave radiometers and at fewer sites (e.g.no data is available for Lauder and Mauna Loa).In fact, the majority of these collocations are found at Payerne (80 to 100% depending on the altitude), making the top of the plot a (rather) local instead of global picture. Figure 2 shows the same picture as Fig. 1 but with the outlier profiles removed as described above.The median difference profiles are, as expected, virtually the same.The mean now follows the median from an altitude of about 20 to around 60 km.Outside this range we still detect outliers due a low signal to noise ratio and increased scintillation (low altitudes), whereas we will investigate with the longer and larger v5.00 dataset if the observed behaviour at higher altitudes is also seen at other locations. Validation of the GOMOS v5.00 ozone profiles In this section we present the validation results for all seven years.Note that more collocations are found in early years where funding was available for additional validation measurements, and secondly, GOMOS had a larger spatial coverage in the beginning as it could use a larger azimuth range for the line of sight. We have split the main dataset into various subsets to identify possible dependencies on observation characteristics.Table 1 gives an overview of the used ranges for these parameters and Fig. 3 shows the locations of the GOMOS data together with the validation sites. Illumination condition Figure 4 shows the quality of the observations as a function of the illumination condition.The bright limb cases are presented on the left panel, showing that the retrieval with the current processor is still insufficient for these cases.At high altitudes there is a large negative bias and below 35 km the profiles contain many extreme values. Under twilight conditions (middle panel), the results look a lot better.Compared to the full-dark limb cases (right panel), there are more high outliers, but a substantial amount of data can be used. In our "dark" selection (solar zenith angle >107 • ), a part of the data has limb illumination flags (see Sect. 
2.1) indicating twilight and/or straylight contamination (flags equal to 2, 3 or 4) of the profiles. These 'light-contaminated' data have been compared to those flagged 'dark' (flag equal to 0) in the latitude region 40° N to 50° N. This region was chosen to avoid a potential latitude bias: no dark-flagged collocations are found above 55° N, and insufficient pairs were found in the Southern Hemisphere. The profiles that are flagged as light-contaminated give overall more negative differences than those flagged 'dark', but these differences are not significant. However, note that the 'dark'-flagged cases consisted of at most 70 collocations at a given altitude; when more data become available with time, the differences might turn out to be significant.
Stellar properties
Observations of strong stars should result in profiles of higher quality, as the signal is less noisy. Indeed the 16% and 84% quantiles (Fig. 5) show a narrower distribution over a large part of the altitude range. However, the 97.5% quantile shows the presence of some high-value outliers. The number of collocations with strong stars is low in comparison to the weak-star cases, making the difference profiles more variable. At altitudes above 45 km, the majority of the collocations are in the polar region (Ny Ålesund microwave radiometer), whereas for the weak-star observations most of the collocations are located in the mid-latitude region. This difference explains why the difference profiles for the top appear worse for the strong-star cases: when we consider only the polar cases, there is almost no difference between the two star magnitude groups.
Fig. 4. Validation results for the different limb illumination conditions. Left panel: bright limb; middle panel: twilight limb; right panel: dark limb cases. All plots show the median difference (thick black line) between GOMOS and the validation data together with the 16 and 84 percentiles (thin dark grey lines) and the 2.5 and 97.5 percentiles (thin light grey lines). On the right side of each panel is the number of collocated pairs used for the corresponding altitude.
Collocation criteria subsets: ≤800 km and t ≤ 20 h; ≤400 km and t ≤ 10 h; ≤200 km and t ≤ 5 h.
With respect to the temperature of the observed stars, fewer collocations with cold stars are available than with hot stars, especially in the mesosphere, where all collocations are with weak stars. The combination of weak and cold stars complicates the retrieval (Kyrölä et al., 2010a, b), which results in a higher error estimate. This is reflected in the decreasing number of available collocations with altitude as we filter on a maximum error of 20%. No significant influence of the star's temperature on the results is then observed. However, if we increase the maximum permitted error for GOMOS to 100%, we see an increase in the number of available profiles, but the higher half of the profile (roughly above 40 km) shows a strongly increased variability and the median differences increase with respect to the cases shown in Fig. 6 (e.g. at 55 km, the data have a negative bias of 50% and at 70 km the bias equals about 30%; not shown). Note that the mentioned data are not flagged invalid.
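The per-altitude difference statistics used throughout these comparisons (median, 16/84 and 2.5/97.5 percentiles of the relative differences, computed after filtering on the reported GOMOS error) can be sketched as below; the array shapes, variable names and NaN-based masking are assumptions of this illustration.

```python
import numpy as np

def difference_statistics(gomos, valid, gomos_err_pct, max_err_pct=20.0):
    """Per-altitude statistics of the relative differences
    100 * (GOMOS - VALID) / VALID for collocated profile pairs.

    `gomos`, `valid` and `gomos_err_pct` are 2-D arrays of shape
    (n_pairs, n_levels) on the common altitude grid; points whose
    reported GOMOS error exceeds `max_err_pct` are excluded.
    """
    gomos = np.asarray(gomos, dtype=float)
    valid = np.asarray(valid, dtype=float)
    rel = 100.0 * (gomos - valid) / valid
    rel[np.asarray(gomos_err_pct, dtype=float) > max_err_pct] = np.nan

    return {
        'median': np.nanmedian(rel, axis=0),
        'mean': np.nanmean(rel, axis=0),
        'std': np.nanstd(rel, axis=0),
        'q2.5': np.nanpercentile(rel, 2.5, axis=0),
        'q16': np.nanpercentile(rel, 16, axis=0),
        'q84': np.nanpercentile(rel, 84, axis=0),
        'q97.5': np.nanpercentile(rel, 97.5, axis=0),
        'n_pairs': np.sum(np.isfinite(rel), axis=0),
    }
```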
Line of sight azimuth angle Figure 7 shows the influence of the line of sight (LOS) azimuth angle during the time of observation.Most observations are found to be in slant viewing and quite a few (given the smaller azimuth range) are in the back LOS.The median difference profiles are very similar, but fewer outliers are observed in the back LOS configuration.In contrast to Meijer et al. (2004), an increased standard deviation is not (any longer) seen for the side LOS data. GOMOS is currently (September 2010) operating in the range 17 • to 47 • , which corresponds mostly to the slant LOS.The past ranges are listed in the GOMOS monthly status reports, see http://earth.esa.int/pcs/envisat/gomos/reports/monthly. Geographical area For the analysis shown in Fig. 8 the dataset has been split into three geographical regions.Most collocations are found in the mid-latitude region (right panel), where the majority of the validation stations is located.In the polar region (left panel) there are also many collocations: even though there are fewer stations, there are many GOMOS overpasses given the orbit of ENVISAT.This leads to various GOMOS measurements collocating with a single ground-based measurement.The collocating microwave data are from two stations: Ny Ålesund (largest contribution) and Kiruna (3 profiles above 55 km).The GOMOS profiles increasingly start to overestimate the ozone concentration above 50 km, which is likely an effect of the increasing uncertainties in the microwave radiometer data and the increasing straylight contamination.Perhaps the processor that is under development in the GOMOS bright limb project will improve the ozone retrieval as it does not depend on the weak star signals.In comparison to the other regions, the bias is more negative between 18 and 30 km, reaching up to −8%.As indicated in Sect.3.2.1, it is possible that (part of) this more negative bias originates from twilight/straylight contamination of the profiles.A larger dataset is required to be conclusive. In the tropical region (right panel), fewest collocations are available.The effect of decreasing signal after having descended below the ozone maximum (which is at a higher altitude in the tropics) is clearly illustrated, as the variation increases with decreasing altitude.Likewise, the median shows an offset from the 0% difference in the tropics before that happens in the other areas. Collocation criteria Figure 9 confirms that the chosen collocation criteria are not introducing any biases.In fact, we could consider increasing the allowed difference in equivalent latitude, as we saw for subsets of the data that no clear deterioration was found when changing from 3 to 5 or 10 degrees.A more elaborated study focussing on the polar area is to be carried out in the future. Also, no evidence of a trend was observed when grouping the data by year (not shown), but perhaps this is masked by the flagging and or the chosen error regime. 
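The sensitivity check on the collocation criteria described above can be sketched as follows; the list-of-dicts data layout and field names are assumptions made for the example, and the criteria pairs simply restate the 800 km/20 h, 400 km/10 h and 200 km/5 h subsets.

```python
import numpy as np

# Progressively stricter collocation criteria, as used for Fig. 9
# (maximum distance in km, maximum time difference in hours).
CRITERIA = [(800.0, 20.0), (400.0, 10.0), (200.0, 5.0)]

def median_profile_per_criterion(pairs):
    """Recompute the median relative-difference profile for each criterion.

    `pairs` is a list of dicts with keys 'dist_km', 'dt_h' and 'rel_diff'
    (the per-level relative difference profile of one collocated pair);
    these field names are assumptions.
    """
    results = {}
    for max_dist, max_dt in CRITERIA:
        subset = [p['rel_diff'] for p in pairs
                  if p['dist_km'] <= max_dist and p['dt_h'] <= max_dt]
        if subset:
            results[(max_dist, max_dt)] = np.nanmedian(np.vstack(subset), axis=0)
        else:
            results[(max_dist, max_dt)] = None
    return results
```

Comparing the resulting median profiles level by level is one simple way of confirming, as stated above, that tightening the criteria neither introduces nor removes a bias.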
Conclusions Ground-/balloon based instruments can be used to bridge the gap between different satellite instruments, both in terms of technique and time.The ground-based observations often provide a long-term monitoring record with a high vertical resolution at a single location, whereas the satellite measurements are complementary as they can provide a global coverage with a limited life span.The comparison between data from satellite and ground-based instruments is a necessity to validate the retrievals and to monitor the performance of the instruments (Froidevaux et al., 2008;Hocke et al., 2007;Nardi et al., 2008;Jégou et al., 2008).The suite of groundbased and satellite retrievals together with models furthermore provides a unique tool to study atmospheric events and to detect trends (Ladstätter-Weißenmayer et al., 2007;Steinbrecht et al., 2006;Steinbrecht et al., 2009). In this study we first have compared the ozone profiles from the current operational processor (version 5.00) with the previous version (4.02) by matching the datasets with ground and balloon based measurements.The validation results indicate that the two processing algorithms produce very similar results.The bias has improved in some areas, but a few more outliers are encountered.It was shown that some of the outlying data points can be removed by filtering the profiles on negative and exceptionally large values.Improved quality flagging in future processor versions may overcome this problem. Additionally, we have compared seven years of version 5.00 GOMOS ozone profiles with balloon sonde, lidar and microwave radiometer ozone measurements.Data were collocated using a maximum difference of 800 km, 3 degrees in equivalent latitude and 20 h in time (5 h above an altitude of 50 km).Lidar and microwave radiometer data were restricted to a maximum uncertainty of 30%, while the GOMOS profiles were filtered to exclude measurement points with an er-ror greater than 20% and reporting ozone number densities below 0 or above 10 13 molecules/cm 3 .For the dark limb observations, this resulted in 1897 collocated pairs with balloon soundings, 576 collocations with lidar observations and 587 collocations with microwave radiometer data. The comparison shows that GOMOS profiles obtained from dark limb measurements are found to be of a high quality when the provided processing quality flag is properly taken into account.Profiles measured under twilight conditions are of similar quality as dark limb measurements.However, the occurrence of outliers is higher.Care has to be taken in all cases when dealing with straylight contaminated profiles, which especially affect higher altitudes in the polar region.Also in the mid-latitudes we can observe deviations from the validation data in the mesosphere.In the tropics there is a better match in the mesosphere between the validation instruments and the GOMOS measurements, but some large outliers are present.Overall, the ozone profiles are most similar (within a few percent) in the range 20 to 40 km, where the bias is moving towards the positive and the lowest good retrieval altitude increases when going from the poles to the equator. 
Theoretically, observations of strong stars (visual magnitude ranging between −2 and 1) should result in profiles of higher quality (less noise) than observations of weak stars (magnitude between 1 and 4). However, for the GOMOS data within the selected error range (0-20%), we did not see any clear distinction between these two groups, but possibly that is related to the selection criteria applied here. The same is valid for the distinction between hot and cold stars. For instance, when extending the allowed error range to 100%, we see a large increase of the bias for the profiles obtained with cold stars. Comparing the different azimuth ranges for the line of sight (LOS), we can conclude that the median difference profiles are very similar and the smallest number of outliers is observed using the back LOS configuration.
No evidence of a temporal trend was seen in the bias or occurrence of outliers, but it is likely that more profiles are rejected as the instrument ages. An analysis using a less strict data selection might be used to confirm this. The next GOMOS processor version is expected to better deal with the increased dark charge of the detectors, reducing the number of outliers and thus increasing the overall profile quality.
Fig. 2. As Fig. 1, but with the outlier profiles removed.
Fig. 3. Global overview of collocated measurements available in this study. GOMOS measurements in black (dark limb observations), dark grey (twilight conditions) and light grey (bright limb observations) circles together with the validation sites plotted as blue asterisks.
Fig. 9. As Fig. 4, here showing the effect of making the collocation criteria stricter. Left panel: cases with a maximum difference of 800 km and 20 hours; middle panel: 400 km and 10 hours maximum difference; right panel: cases fulfilling a 200 km and 5 hours maximum difference.
Table 1. Overview of analysed data subsets per parameter.
10,005.8
2010-11-08T00:00:00.000
[ "Environmental Science", "Physics" ]
A Common Fixed Point Theorem for Weakly Compatible Multi-Valued Mappings Satisfying Strongly Tangential Property The concept of compatibility was introduced and used by G. Jungck [8] to prove the existence of a common fixed point; this notion generalizes weakly commuting mappings. Further, there are various types of compatibility: compatibility of type (A), of type (B), of type (C) and of type (P) for two self mappings f and g of a metric space (X, d) was introduced respectively in [10], [18], [17] and [16] as follows: the pair {f, g} is compatible of type (A) if lim_{n→∞} d(fgx_n, g²x_n) = 0 and lim_{n→∞} d(gfx_n, f²x_n) = 0.
Introduction
The concept of compatibility was introduced and used by G. Jungck [8] to prove the existence of a common fixed point; this notion generalizes weakly commuting mappings. Further, there are various types of compatibility: compatibility of type (A), of type (B), of type (C) and of type (P) for two self mappings f and g of a metric space (X, d) was introduced respectively in [10], [18], [17] and [16] as follows: the pair {f, g} is compatible of type (A) if lim_{n→∞} d(fgx_n, g²x_n) = 0 and lim_{n→∞} d(gfx_n, f²x_n) = 0; f and g are compatible of type (B) if [...]. In 1996, Jungck [11] introduced a concept which generalizes all the above types of compatibility and is weaker than them: two self mappings of a metric space (X, d) are said to be weakly compatible if they commute at their coincidence points, i.e. if fu = gu for some u ∈ X, then fgu = gfu. Let (X, d) be a metric space and let CB(X) be the set of all non-empty bounded closed subsets of X. For all A, B ∈ CB(X), the Hausdorff metric is defined by
H(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(b, A) },
where d(a, B) = inf_{b∈B} d(a, b), and (CB(X), H) is a metric space. For all a ∈ A, we have d(a, B) ≤ H(A, B).
Preliminaries
H. Kaneko and S. Sessa [13] extended the concept of compatibility to the setting of single and set-valued maps as follows: let f : X → X and S : X → CB(X) be single and set-valued mappings; the pair {f, S} is said to be compatible if, for all x ∈ X, fSx ∈ CB(X) and [...]. Jungck and Rhoades [8] generalised the concept of weak compatibility to the setting of single and set-valued mappings: Definition 2.1. A single-valued mapping f : X → X and a set-valued mapping S : X → CB(X) of a metric space (X, d) are said to be weakly compatible if they commute at their coincidence points, i.e. if fu ∈ Su for some u ∈ X, then fSu = Sfu. Recently, Al-Thagafi and Shahzad [4] introduced the notion of occasionally weakly compatible maps in metric spaces: two self mappings f and g of a metric space (X, d) are said to be occasionally weakly compatible (owc) if and only if there is a point u ∈ X such that fu = gu and fgu = gfu. Notice that weak compatibility implies occasional weak compatibility; the converse need not hold. Later, Abbas and Rhoades [1] extended occasionally weakly compatible mappings to the setting of single and set-valued mappings: Definition 2.2. Two mappings f : X → X and S : X → CB(X) are said to be owc if and only if there exists some point u in X such that fu ∈ Su and fSu ⊆ Sfu. Example 2.2. Let X = [0, ∞) and let d be the Euclidean metric; we define f and S as follows: [...]; then f and S are owc. Pathak and Shahzad [16] introduced the concept of the tangential property as follows: let f, g : X → X be two self mappings of a metric space (X, d); a point z ∈ X is said to be a weak tangent point to (f, g) if there exist two sequences [...].
In 2011, W. Sintunavarat and P. Kumam [25] extended the last notion to single and multi-valued maps: Definition 2.3. Let f, g : X → X be single-valued mappings and S, T : X → B(X) two multi-valued mappings on a metric space (X, d); the pair {f, g} is said to be tangential with respect to {S, T} if there exist two sequences {x_n}, {y_n} in X such that [...]. S. Chauhan, M. Imdad, E. Karapinar and B. Fisher [6] introduced a generalization of the last notion by adding another condition, as follows: Definition 2.4. Let f, g : X → X be single-valued mappings and S, T : X → CB(X) two multi-valued mappings on a metric space (X, d); the pair {f, g} is said to be strongly tangential with respect to {S, T} if there exist two sequences {x_n}, {y_n} in X such that [...]. Example 2.3. Let X = [0, 4] with d the Euclidean metric; we define f, g, S and T by [...]. We have f(X) = [1, 3] and g(X) = [0, 5], so f(X) ∩ g(X) = [1, 3]. Consider two sequences {x_n}, {y_n} defined for all n ≥ 1 by [...]. Clearly [...], so {f, g} is strongly tangential with respect to {S, T}. If in Definition 2.4 we have S = T and f = g, we obtain the following definition: Definition 2.5. Let f : X → X and S : X → B(X) be two mappings on a metric space (X, d); f is said to be strongly tangential with respect to S if [...]. Example 2.4. Let X = [0, 2] with the Euclidean metric, and let f and S be defined by [...]. Consider two sequences {x_n}, {y_n} defined for all n ≥ 1 by x_n = 1/n, y_n = 1 + 1/n; we have [...] and [...]. Let Φ be the set of all upper semi-continuous functions φ : R^5_+ → R_+ satisfying the conditions: (φ₁) φ is non-decreasing in each coordinate variable; (φ₂) for any t > 0, ψ(t) = max{φ(0, t, 0, 0, t), φ(0, 0, t, t, 0), φ(t, 0, 0, t, t)} < t. The aim of this paper is to prove the existence of a common fixed point for weakly compatible single and set-valued mappings in a metric space satisfying a contractive condition of integral type, by using the strongly tangential property; our results generalize and extend some previous results.
Main results
Theorem 3.1. Let f, g : X → X be single-valued mappings and S, T : X → CB(X) multi-valued mappings of a metric space (X, d) such that for all x, y in X we have: (1) [...], where φ ∈ Φ and ϕ : R_+ → R_+ is a Lebesgue-integrable function which is summable on each compact subset of R_+, non-negative, and such that for each ε > 0, ∫_0^ε ϕ(t) dt > 0. Suppose that the two pairs {f, S}, {g, T} are weakly compatible and {f, g} is strongly tangential with respect to {S, T}; then f, g, S and T have a unique common fixed point in X.
Proof. Suppose {f, g} is strongly tangential with respect to {S, T}; then there exist two sequences {x_n}, {y_n} such that [...] and z ∈ f(X) ∩ g(X), so there exist u, v ∈ X such that z = fu = gv. We now claim z ∈ Su; if not, by using (1) we get [...], which contradicts (φ₂); then d(z, Su) = 0 and so z ∈ Su. We claim z = gv ∈ Tv; if not, using (1) we get [...], which is a contradiction; then z ∈ Tv. Since {f, S} is weakly compatible and fu ∈ Su, we have fSu = Sfu and so fz ∈ Sz; likewise for {g, T} we obtain gz ∈ Tz. Now we claim z = fz; if not, by using (1) we get [...], which is a contradiction; then z = fz ∈ Sz, which implies that z is a common fixed point of f and S. Similarly, we claim z = gz; if not, by using (1) we get [...], which contradicts (φ₂); then z = gz ∈ Tz, and consequently z is a common fixed point of f, g, S and T. For the uniqueness, suppose there is another point w satisfying w = fw = gw ∈ Sw = Tw; if w ≠ z, by using (1) we get [...], which is a contradiction; then z = w.
If S = T and f = g, we obtain the following corollary: Corollary 3.1. Let f : X → X and S : X → CB(X) be single- and set-valued mappings of a metric space (X, d) such that [...], where φ ∈ Φ and ϕ : R_+ → R_+ is a Lebesgue-integrable function which is summable on each compact subset of R_+, non-negative, and such that for each ε > 0, ∫_0^ε ϕ(t) dt > 0. If f is strongly tangential with respect to S and {f, S} is weakly compatible, then f and S have a unique common fixed point. Corollary 3.2. Let f, g : X → X and S, T : X → CB(X) be single- and set-valued mappings of a metric space (X, d) such that [...], where φ ∈ Φ and ϕ : R_+ → R_+ is a Lebesgue-integrable function which is summable on each compact subset of R_+, non-negative, and such that for each ε > 0, ∫_0^ε ϕ(t) dt > 0, and a, b, c are non-negative real numbers such that a + b + c < 1 and p ∈ N. If {f, g} is strongly tangential with respect to {S, T} and the two pairs {f, S}, {g, T} are weakly compatible, then f, g, S and T have a unique common fixed point.
[...] dt; letting n → ∞, since d(z, fz) ≤ H(M, Sz), and applying the triangle inequality, we get d(fz, M) ≤ d(fz, z) + d(z, M) = d(fz, z), then: [...]
2,429
2014-01-01T00:00:00.000
[ "Mathematics" ]
Re-theorising mobility and the formation of culture and language among the Corded Ware Culture in Europe Abstract Recent genetic, isotopic and linguistic research has dramatically changed our understanding of how the Corded Ware Culture in Europe was formed. Here the authors explain it in terms of local adaptations and interactions between migrant Yamnaya people from the Pontic-Caspian steppe and indigenous North European Neolithic cultures. The original herding economy of the Yamnaya migrants gradually gave way to new practices of crop cultivation, which led to the adoption of new words for those crops. The result of this hybridisation process was the formation of a new material culture, the Corded Ware Culture, and of a new dialect, Proto-Germanic. Despite a degree of hostility between expanding Corded Ware groups and indigenous Neolithic groups, stable isotope data suggest that exogamy provided a mechanism facilitating their integration. This article should be read in conjunction with that by Heyd (2017, in this issue). Introduction With recent results from ancient DNA research showing an extensive incoming gene flow into Europe shortly after 3000 BC Haak et al. 2015), we are finally in a position where migrations can be documented rather than debated as an element in the formation of the Corded Ware Culture. This has lifted an interpretative burden from archaeology in much the same way as did 14 C dating when it was introduced. The new 'freedom' can instead be invested in properly theorising and interpreting local processes of migration, integration and consolidation, which represent an underdeveloped field of research. By integrating recent results from genetics, stable isotopes, archaeology and historical linguistics, this will in turn allow us to formulate better-founded models for the interaction of intruding and settled groups and the formation of a new material culture, and consequently better models for language dispersal and language change. Re-theorising migrations The evidence from the recent studies of ancient DNA documenting human migrations into temperate Europe during the early third millennium BC can be summarised as follows: r There was a widespread process of genetic admixture, leading to a reduction of Neolithic DNA in temperate Europe and the dramatic increase of a new genomic component that was only marginally present in Central Europe prior to 3000 BC Haak et al. 2015;Cassidy et al. 2016). r Although the details of this admixture event can and will be debated for years to come (Vander Linden 2016), it remains beyond question that the observed change in the gene pool must have involved the migration of people. Moreover, the apparent abruptness with which this change occurred suggests that it was a large-scale migration event, rather than a slow periodic gene flow across many centuries. r The Yamnaya people from the Pontic-Caspian steppe are the best-known proxy for this incoming gene flow. The exact source could have been another, yet unsampled, group of people, but, in that case, they must have been very closely related genetically to Yamnaya. What facilitated this major demographic event remains open to speculation, but the late fourth millennium BC was a period of widespread technological innovation, which also introduced long-distance travel. This horizon could thus have formed a prelude to the Yamnaya migrations, opening up new corridors of cultural transmission on which subsequent developments depended (Johannsen & Laursen 2010;Hansen 2011Hansen , 2014. 
Moreover, a decline in Neolithic activity around 3000 BC (Hinz et al. 2012;Shennan et al. 2013) could indicate a crisis in Neolithic societies, thereby allowing space for incoming migrants. In that light, the recent documentation of an early form of plague, widespread from Siberia to the Baltic in the early third millennium BC, could play a key role in explaining this genetic changeover . These extensive demographic changes led to the formation of a new social and economic order in large parts of temperate Europe, resulting in the formation of the Corded Ware Culture. As evident from, for example, western Jutland (Andersen 1993;Kristiansen 2007), Corded Ware people burned down forests on a massive scale, thereby creating open, steppelike grazing lands for their herds. A more gradual opening of the landscape is also found in other regions (Doppler et al. 2015), while subsistence seems to have been a variable mix of cultivation, husbandry and some hunting and gathering (Müller et al. 2009). The Corded Ware Culture elected the battle-axe as the most prominent male symbol, and created new types of pottery. Among the Yamnaya cultures, the tradition of potterymaking was weakly developed. Being a pastoral, mobile economy, they instead employed containers made of leather, wood and bast, and woven vessels could have been used as well, just as they used mats and other lightweight materials that were easy to transport in their wagons (Shishlina 2008: 60, fig. 54). The Corded Ware Culture had widely shared similarities in burial rituals over vast distances (Furholt 2014: fig. 7), and had strong affinities to the Yamnaya burial rituals known from the steppe. The tens of thousands of small, single-grave barrows in Northern Europe were aligned in rows across the landscape, in a similar way to the practice on the steppe. They formed visible lines of communication in these vast open environments (Hübner 2005;Bourgeois 2013). The unifying element between Yamnaya and Corded Ware is the burial ritual of a single inhumation under a barrow, even though there are minor differences in grave goods and the positioning of the body. Burial rituals are among the most fundamental social institutions in any society, as they relate to the transmission of property and power at death, to cosmology and religion, and, in the case of settlements, to the way households are organised. We can therefore formulate this as an axiom: A strong relationship exists between burial ritual and social and religious institutions, because a burial is the institutionalised occasion for the transmission of property and power, and the renewal of social and economic ties (Oestigaard & Goldhahn 2006). A radical change in burial rites therefore signals a similar change in beliefs and institutions. If such a change occurs rapidly without transition it signals a transformation of society, often under strong external influence, possibly a migration (to be supported also by settlement change and economic change). This does not rule out the effects of internal contradictions, which, however, often go hand in hand with external forces of change. By contrast, the portable material culture is more readily prone to change, as it relates to various forms of personal identity and group identities that may fluctuate in time and space, as demonstrated by Hodder (1982). 
The institutions of burial ritual and of households or settlements should therefore form the core in comparative analyses of mobility and migrations because they do not change easily (see too Kristiansen 1989;Burmeister 2000;Prien 2005). Lack of a theoretical understanding of social institutions, and their differing role in organising material culture, goes a long way to explaining the many failed attempts to come to terms with migrations in the archaeological record, as well as with various forms of ethnicity (e.g. Andresen 2004;Brather 2004). Portable material culture can be used selectively to identify mobility, depending on its social and personal role. Thus, fibulae and pins are carried as part of dress, and may be helpful in defining the movements of individuals. Most previous research on the Corded Ware Culture has used portable material culture, especially pottery, as a diagnostic feature, and therefore stressed differences between Corded Ware and Yamnaya steppe cultures while overlooking more basic similarities. We therefore need to re-analyse the (possible) social processes from initial Yamnaya migration to settling down and forming a new material culture: Corded Ware (Furholt 2014). The formation of the Yamnaya and Corded Ware Cultures Given that widespread migrations from the Caspian-Pontic steppe now constitute a key element in any plausible explanation of the Corded Ware phenomenon, we need to focus attention on how it unfolded locally in order to understand processes of demographic and cultural dominance. But we also need to understand the economic and social system of the Yamnaya culture from which it originated, and which flourished and expanded from around 3000 BC (Anthony 2007). First, it should be noted that the Yamnaya cultures of the Pontic and Caspian steppe represented the first development of a fully pastoral economy that exploited different ecological niches during small-scale seasonal movements of people and animals between summer and winter grazing. Its economy and variability has been analysed in detail by Natalia Shishlina in a classic work (Shishlina 2008). This variability is also reflected in diet (Shishlina et al. 2012). The period after 3000 BC saw a more humid climate that favoured grassland productivity, and thus the new pastoral economy experienced a rapid demographic expansion that included eastern Central Europe: Bulgaria, Hungary and Transylvania, and Northern Europe soon afterwards (Harrison & Heyd 2007;Heyd 2011;Gerling et al. 2012;Horváth et al. 2013;Kaiser & Winger 2015). Herds consisted of cattle, sheep and horse, and the mobile lifestyle within small territories was supported by the use of wagons as mobile homes. Only a few stable settlements are known, but from burial pits, we find extensive use of thick plant mats and felt covers; the same materials were probably in use for dwellings. Light-framework dwellings could easily be assembled and disassembled and transported on pack animals. The economy was based on meat and dairy products, as well as fish (reflected in high 15 N values), and seeds from wild plants were collected and used in soup with meat (Schulting & Richards 2016). No agriculture is documented, but to the west, some cereal cultivation was practised (Pashkevych 2012). Extensive exchange systems linked different groups together and secured access to products outside the pastoral economy, such as metal. The healthy diet meant that life expectancy was fairly high, with many individuals living to 50-60 years old. 
We can also observe a selection for lactose tolerance and for height (Mathieson et al. 2015). Barrows were aligned in groups forming lines in the landscape to mark seasonal routes. Similar arrangements are found in Northern Europe, suggesting a shared perception and use of landscapes among Single Grave populations from Holland to western Jutland (Hübner 2005;Bourgeois 2013). Secondly, we should observe that Corded Ware Cultures co-existed with late Neolithic cultures for shorter or longer periods across much of Central and Northern Europe. In Denmark, there were late Funnel Beaker communities in the Danish islands (Iversen 2015); in other parts of Northern Europe they were often in close proximity, such as the Globular Amphora Culture in Poland (Szmyt 1999). What we observe, therefore, in the archaeological record is a gradual process of acculturation and integration, which meant that after 2400 BC, the former strict cultural boundaries were gradually dissolved and a new, shared material culture appeared, represented first and foremost in Denmark by flint daggers, and in Central Europe by early Únetiče metal daggers. Bell Beaker groups had by now also emerged on the scene, introducing metallurgy, and they further complicated the mix of cultures and people. In burial rituals, however, old megalithic traditions still had an impact, as seen in a revival of stone cist burial in some regions. It was only on the advent of the Middle Bronze Age that cultural homogenisation prevailed. Thus, it took nearly 1000 years before all regions in Northern and Central Europe had adopted a shared social and cultural outlook that in all probability also included shared languages. With the help of strontium isotopic analysis and ancient DNA we can now reconstruct in some detail the social processes behind the observable archaeological changes. At the Corded Ware cemetery of Eulau, the application of strontium isotopic tracing, ancient DNA and archaeology has allowed a full reconstruction of a singular family massacre and its local background (Haak et al. 2008;Meyer et al. 2009;Muhl et al. 2010). Four multiple burials contained single families of father, mother and children in various combinations, and it could be demonstrated that the mothers were of non-local origin, most probably originating in the Harz Mountains 50-60km north of the settlement. The arrows that had killed the families confirmed this, as they belonged to another Neolithic culture: the Schönfeld, which was located in this area and practised cremation, a burial custom different to that of the Corded Ware Culture (compare Muhl et al. 2010: 44, 125). Contacts between the two are also illustrated by the occurrence of Schönfeld pottery in Corded Ware graves in the Halle-Saale region (Furholt 2003). Other Neolithic groups in the region, such as the Bernburg Culture, practised collective burials of multiple family groups, as demonstrated by genetics, again being distinctively different from the Corded Ware practice of individual burials (Meyer et al. 2012). We observe two things: that Corded Ware males practised exogamy, perhaps marriage by abduction, which provides a possible explanation for the killing. The question is: was this a unique case, or did it reflect a more widespread marriage practice, as suggested by Schönfeld pottery in Corded Ware burials? In a recent work on diet and mobility among Corded Ware cemeteries from southern Germany (Sjögren et al. 
2016), it was possible to demonstrate that exogamy was indeed a common practice among Corded Ware groups in this larger region (from a sample of 60). Most adult women (between 28 and 42 per cent) were of non-local origin and had a different diet during childhood. Such evidence fits well with recent genetic information documenting more-varied mtDNA haplogroups among Corded Ware females than among males (Lazaridis et al. 2014). The female diet was more similar to previous Neolithic diets, while in Corded Ware as a whole there is a shift towards higher δ 15 N values, suggestive of a shift in diet and/or in cultivation practices. There may be several different explanations for this shift, such as intense forms of cultivation, higher reliance on freshwater fish, or on animal versus vegetable protein, or a greater reliance on milk and milk products. The latter is supported by a widespread opening-up of landscapes in some regions for grazing animals (Andersen 1993(Andersen , 1995Doppler et al. 2015;Dietre et al. 2016). The analysis comprises 60 individuals and covers the period from the early Corded Ware to the mature and late Corded Ware; in other words, from 2900/2800-2300 BC. Among the burials from the earliest, colonising phase at Tiefbrunn, we find a multiple burial of three individuals: one older male with a hammer-headed pin of steppe type, a young adult male and a female child, around four years old. mtDNA haplogroups were different for all three, indicating that they were not related on the maternal side . Sr isotope ratios suggest that the older man was non-local, while the younger man and the child may have been locals. The skulls of all three individuals exhibited signs of severe trauma and they had probably suffered violent deaths, which again demonstrates that the newcomers were not always welcomed peacefully. There is a similar early burial from the Kujawy region in Poland, of an elderly male of non-local origin, who also had a hammer-headed pin showing steppe influence . Analysis of ancient DNA from the Tiefbrunn multiple burial showed a high percentage of Yamnaya steppe DNA. The larger, consolidated cemeteries from Bergrheinfeld and Lauda-Königshofen are from the middle phase of the Corded Ware Culture (2600-2500 BC), and here the practice of exogamy was well established over a longer period of time. We cannot know where the non-local women were from, but as their diet looked more 'Neolithic', we may assume that they originated in late Neolithic cultures still residing on the higher elevations in the region. Exogamy is a clever, and perhaps necessary, policy if new migrating groups are mainly constituted by males. This is a probable scenario for an expanding pastoral economy, and is supported by archaeological data from the early horizon of the Single Grave/Corded Ware Culture in Jutland, where 90 per cent of all burials belonged to males (Hübner 2005: 632-33, fig. 454). It gains further support from later historical sources from India to the Baltic and Ireland (Falk 1986;Kershaw 2000). They describe, as a typical feature of these societies, the formation of warrior youth bands consisting of boys from 12-13 up to 18-19 years of age, when they were ready to enter the ranks of fully grown warriors. Such youthful warbands were led by a senior male, and they were often named 'Black Youth' or given names of dogs and wolves as part of their initiation rituals. 
The nature of this institution was recently summarised as follows: In the Indo-European past, the boys first moved into the category of the (armed) youths and then, as members of the war-band of unmarried and landless young men, engaged in predatory wolf-like behaviour on the edges of ordinary society, living off hunting and raiding with their older trainers/models. Then about the age of twenty they entered into the tribe proper as adults (Petrosyan 2011: 345). The activities of the young war-bands were seasonal; during the rest of the year they lived within their households and communities, perhaps engaged in herding animals and other forms of farm labour. Such bands were mainly made up of younger sons, as inheritance was restricted to the oldest son. Thus, they formed a dynamic force that could be employed in pioneer migrations (Sergent 2003). Archaeological evidence of this institution has been documented in the Russian steppe from the Bronze Age onwards (Pike-Tay & Anthony 2016; Brown & Anthony in press). There is additional evidence to support the idea that males dominated the initial Yamnaya migrations and the formation of the early Corded Ware Culture: in burials from the earliest horizon, often with males, as in Tiefbrunn and Kujawy, there was no typical Corded Ware material culture. This was followed shortly afterwards by the deposit of Atype battle-axes in male burials, but there was as yet no pottery (Furholt 2014: 6, fig. 3). Corded Ware pottery appeared later in Northern Europe, and we may suggest that this did not happen until women with ceramic skills married into this culture and started to copy wooden, leather and woven containers in clay. This process began in the early phase both south and north of the Carpathians (Ivanova 2013;Frînculeasa et al. 2015). Some confirmation of such material transformations is found in a uniquely preserved find of the typical flat bowl with short feet made of wood (Muhl et al. 2010: 47), well suited for turning milk into yogurt or similar dairy products overnight. Its pottery version became a shared type throughout the Corded Ware Culture, and later the Bell Beaker Culture. We may also note that pastoral economies historically tend to dominate agrarian economies, as they are both more mobile and more warlike in their behaviour. Such a pattern of economic and social dominance, reflected in taking wives from farming cultures while sending young males in organised war-bands to settle in new territories, would explain both the genetic and linguistic dominance of the Yamnaya steppe migrations, the results of which we can observe to this day. Figure 1 summarises these transformative processes in a model. Language dispersal and the formation of Proto-Germanic in northern Europe These local processes of social integration between intruding Yamnaya/Corded Ware populations and remnant Neolithic populations can be applied to language dispersal. We should expect that the transformation from Proto-Indo-European to Pre-Proto Germanic would reveal the same kind of hybridisation between an earlier Neolithic language of the Funnel Beaker Culture, and the incoming Proto-Indo-European language. This is precisely what recent linguistic research has been able to demonstrate (Kroonen & Iversen in press). 
In their study on the formation of Proto-Germanic in Northern Europe, Kroonen and Iversen document a bundle of linguistic terms of non-Indo-European origin linked to agriculture that were adopted by Indo-European-speaking groups who were not fully fledged farmers. The most plausible, and perhaps the only possible, context for this to have happened would be the introduction of Proto-Germanic by the intruding Yamnaya groups. Archaeologically, this adoption can be understood from their interaction over several hundred years with late Funnel Beaker groups still residing in eastern Jutland and on the Danish islands, where they maintained a largely agricultural economy. From this we can conclude that terms linked to farming, and the cultivation of many important crops, were missing among the early Yamnaya/Corded Ware groups, who may well have acquired cereals (barley) mainly for the purpose of producing and consuming beer (Klassen 2005). In addition, we learn that the Neolithic language of the Funnel Beaker Culture was in all probability non-Indo-European. This process of language interaction is illustrated by the model in Figure 2. It illustrates that different Indo-European language branches were in contact with one and the same Neolithic tongue throughout Europe. The new data conforms well to the reconstructed lexicon of Proto-Indo-European (Mallory & Adams 2006), which provides important clues that the subsistence strategy of early Indo-European-speaking societies was based on animal husbandry. It includes, for instance, terms related to dairy and wool production, horse breeding and wagon technology. Words for crops and land cultivation, however, have proved to be far more difficult to reconstruct. These results from historical linguistics are supported by similar evidence from archaeology (Andersen 1995;Kristiansen 2007). With the recent study by Kroonen and Iversen (in press), we can now demonstrate how social and economic interaction with existing Neolithic societies also had a corresponding linguistic imprint. This should not surprise us, as similar results are well documented from the interaction of Yamnaya societies with their northern Uralic-speaking neighbours (Parpola & Koskallio 2007). From this we may conclude that Funnel Beaker societies spoke a non-Indo-European language, and thus another pillar in support of the Anatolian hypothesis of farming/language dispersal (Renfrew 1987) has fallen. When the whole complex of wagon terminology is taken into account, i.e. 'wheel', 'axle', 'nave', 'thill', 'yoke', 'hame' (Anthony & Ringe 2015: tab. 1, fig. 1), the idea that all of those terms arose independently in the daughter languages seems extremely unlikely. When we add the evidence from ancient DNA, and the additional evidence from recent linguistic work discussed above, the Anatolian hypothesis must be considered largely falsified. Those Indo-European languages that later came to dominate in western Eurasia were those originating in the migrations from the Russian steppe during the third millennium BC. Conclusion We have been able to reconstruct the social processes of cultural integration and hybridisation that followed from (probable) Neolithic women marrying into Yamnaya settlements dominated by males of first-generation migrants. This practice continued over several generations, and the women soon started to produce new pottery versions of existing containers made of organic materials, with some further innovations. 
The original herding economy of the Yamnaya migrants gradually gave way to new agrarian practices of crop cultivation, which led to the adaptation of new words. The result of this hybridisation process was the formation of a new material culture, the Corded Ware Culture, and of a new dialect, Proto-Germanic (or perhaps more correctly, Pre-Proto-Germanic). The latter was likewise an adaptation to new conditions, with the borrowing of novel terms from neighbouring Neolithic communities and from women who had married in to the migrant communities. Archaeology here provides a socio-linguistic setting for a process of language change over several hundred years between 2800 and 2400 BC. This integrated model of cultural, linguistic and genetic change explains the formation of Corded Ware Cultures as a result of local adaptations and of interaction between migrant Yamnaya populations and indigenous Neolithic cultures. The social institution of exogamy provided an integrating mechanism, despite sometimes hostile relations between intruding Corded Ware groups and residing Neolithic groups; the burials at Eulau are the most prominent example of this. Burial rituals also reveal a major difference in property relations and thus social organisation between existing Neolithic groups and intruding Yamnaya/Corded Ware groups. Both Yamnaya and Corded Ware groups shared individual burials under small family mounds, reflecting the transmission among individual families of animals and other property between generations. In contrast to this, the collective, megalithic or similar type burials of Neolithic groups reflected collective, clan-like shared ownership of property, animals and land. This collision of ideologies was played out gradually, with the Corded Ware political economy and interlinked cosmology as the winner once we enter the Bronze Age. Some influences from the Neolithic past, however, remained in both language and social organisation. This new historical interpretation rests on relatively solid ground, and represents a return to a more dramatic past than the prevailing model of cultural and technological transmissions. Some may not like it for its resemblance to an older paradigm of migrations as a primary cause of cultural change, as represented by Gustav Kossinna and Gordon Childe (Kristiansen 1998: 7-24), but we are now in a position to unravel the complexities behind the historical processes in much detail, and thus avoid the simplistic models of the past. Through this we realise that peaceful interaction and intermarriage between culturally and genetically different groups formed the day-to-day foundations of social life, interspersed with episodes of conflict. In the long term, however, the Corded Ware social formation had the potential to dominate, not least when supported by the migrations of related Bell Beaker groups. Together, their social and demographic force would finally create the foundations for the rise of the Bronze Age. We are only beginning to understand these processes, however, and much new evidence can be expected that will add detail and refine our models, while retaining the big picture.
5,908
2017-04-01T00:00:00.000
[ "History", "Linguistics" ]
Solid-state coexistence of {Zr 12 } and {Zr 6 } zirconium oxocarboxylate clusters † Ligand metathesis, Co( II ) coordination, and partial condensation reactions of an archetypal {Zr 6 } zirconium oxocarboxylate cluster result in the first example of the coexistence of the distinct zirconium oxide frameworks {Zr 6 O 8 } and {Zr 12 O 22 }. Even minor modifications to the reaction conditions push this apparent equilibrium towards the {Zr 6 O 8 }-based product Molecular high-nuclearity zirconium(IV) and cerium(III/IV) oxide structures have emerged as an extension to classical polyoxometalate chemistry with numerous structural analogies and similar formation paradigms. 1 In this context, we have explored the magnetic functionalization of several zirconium(IV) oxide clusters based on highly condensed {Zr 6 O 8 } 2 and {Zr 12 O 22 } 3 frameworks.Whereas little is known about the equilibria of different structural zirconium oxide archetypes in their reaction solutions, their similarity to group-5/6 polyoxometalates, where numerous species or isomers commonly coexist, suggests that, under certain boundary reaction conditions, different zirconium oxide structures could coexist as well.We now scanned the reaction parameter space for Co(II)-substituted zirconium oxocarboxylate clusters, starting from a member of the well-established {Zr 6 O 4 (OH) 4 (carboxylate) 12 } 2type cluster family of clusters, namely [Zr 6 O 4 (OH) 4 (pr) 12 ] (pr: propionate).Reactions were performed in the presence of potential bridging ligands in acetonitrile and small, welldefined amounts of water that facilitate further hydrolysis or condensation steps.‡ This approach yielded a crystalline phase of composition {H[Zr 12 Co 2 O 8 (OH) 14 (pr) 22 (MeCN) 2 (μ-pz)] [Zr 6 Co 6 O 8 (pr) 12 (Hbda) 6 ](NO 3 ) 3 •8MeCN} n (1), in which clusters of both the {Co 6 Zr 6 } and the {Co 2 Zr 12 } types co-crystallize (H 2 bda: N-butyldiethanolamine).Here, pyrazine (pz) selectively coordinates to the Co centers of the {Co 2 Zr 12 } moieties and interlinks the latter into a one-dimensional coordination polymer; we note that while a few zirconium oxide-based MOFs characterized by single-crystal X-ray diffraction exist, 4 1 represents the first structurally characterized heterometallic zirconium oxide-based coordination polymer.Importantly, replacing the pz linker with an alternative potential linker reagent, 4,4′-biphenol (H 2 bph), under otherwise comparable reaction conditions results in the formation of crystals of only the discrete {Co 6 Zr 6 }-type cluster, isolated as [Co 6 Zr 6 O 8 (pr) 12 (Hbda) 6 ] (Hbph) 2 •(H 2 bph) (2). The two (crystallographically equivalent) Co(II) sites each coordinate to two oxygen atoms of two propionate ligands, and a pyrazine ligand coordinates to two adjacent Co sites of neighboring {Co 2 Zr 12 } groups, resulting in an infinite helical chain (Fig. 
S1†). Conclusions The close similarity between the reaction conditions yielding compounds 1 and 2 highlights the sensitive nature of the equilibrium conditions under which the distinct {Zr6} and {Zr12} zirconium oxide cluster archetypes can co-crystallize. We note, however, that we cannot rule out the coexistence of these cluster types in the reaction solution that eventually produces crystals of 2: whereas the {Zr12} units in 1 are interlinked by pyrazine into a 1D coordination polymer, effectively reducing their solubility and facilitating precipitation, the 4,4′-biphenol linker employed in the reaction solution for compound 2 might simply have too low an affinity for Co(II) coordination to adopt the same role. More systematic in situ studies of the reaction solutions are currently underway to determine how far the coexistence relations seen in the solid state relate to the conditions in solution. Full-sphere data collection with exposures of 30 s per frame was made with ω scans in the range of 0–180° at φ = 0, 120, and 240°. A semiempirical absorption correction was based on the fit of a spherical harmonic function to the empirical transmission surface as sampled by multiple equivalent measurements 6 using SADABS software. 7 Measurements were optimized to collect data to a resolution of 0.71 Å; however, the datasets have been truncated to the statistically relevant resolution. The positions of the metal atoms were found by direct methods. The remaining atoms were located in an alternating series of least-squares cycles and difference Fourier maps. All non-hydrogen atoms were refined in a full-matrix anisotropic approximation. All hydrogen atoms were placed in the structure factor calculation at idealized positions and were allowed to ride on the neighboring atoms with relative isotropic displacement coefficients. Notes and references
Fig. 1 A segment of the solid-state structure of 1 illustrating the coexistence of one helical polymer strand A, consisting of three pyrazine-linked {Co2Zr12} groups, and two adjacent {Co6Zr6} (B) clusters. The zirconium oxide substructures are shown in an octahedral representation. Co: purple spheres, C: grey, N: blue, O: red. Nitrate groups, acetonitrile solvate molecules, and hydrogen positions are omitted for clarity.
Fig. 3 Molecular structure of B. Color codes as in Fig. 2. The {Co6Zr6} cluster in is virtually isostructural to B. Hydrogen and nBu groups belonging to bda ligands are omitted for clarity.
Fig. 4 Arrangement of the three nitrate groups (shown in their van der Waals spheres) around the {Zr12} core (in a polyhedral representation) in 1.
1,216.6
2014-01-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Feedback Linearization and Sliding Mode Control for VIENNA Rectifier Based on Differential Geometry Theory Aiming at the nonlinear characteristics of VIENNA rectifier and using differential geometry theory, a dual closed-loop control strategy is proposed, that is, outer voltage loop using sliding mode control strategy and inner current loop using feedback linearization control strategy. On the basis of establishing the nonlinear mathematical model of VIENNA rectifier in d-q synchronous rotating coordinate system, an affine nonlinear model of VIENNA rectifier is established. The theory of feedback linearization is utilized to linearize the inner current loop so as to realize the d-q axis variable decoupling. The control law of outer voltage loop is deduced by utilizing sliding mode control and index reaching law. In order to verify the feasibility of the proposed control strategy, simulation model is built in simulation platform of Matlab/Simulink. Simulation results verify the validity of the proposed control strategy, and the controller has a strong robustness in the case of parameter variations or load disturbances. Introduction With the development of power electronic technology, threelevel pulse width modulation (PWM) rectifiers are widely used in high or medium power converters because of their excellent performance: low switch voltage stress, low input current harmonic distortion, high efficiency, input power factor is unit, and so on [1][2][3][4][5][6][7].Three-phase/switch/level VIENNA rectifier (abbreviated as VIENNA rectifier) is one of the best three-level rectifier, which is proposed in 1994 by Kolar and Zach.Compared with traditional three-level rectifier, such as diode clamping three-level rectifier, VIENNA rectifier has lots advantages, such as small number of power switch tube, simple control circuit, and low design costs, without output voltage bridge arm shoot-through problems.So, more and more scholars and engineers focus their attention on the study of VIENNA rectifier and its control strategy [3][4][5][6][7][8][9][10][11][12][13][14]. Since VIENNA rectifier is a typical strong coupling nonlinear system, it leads to difficulty in designing the controller.In [8,9], the mathematical models of large and small signals topology are analyzed in detail, and the controller is designed by using proportion and integral (PI) algorithm.A control method of input/output accurate linearization is proposed in [10].State-space average model is established, and PI control algorithm is used in the outer voltage loop; hysteresis control is used in the inner current loop in [11][12][13].The control methods described above improved the performance of VIENNA rectifier to some extent.However, there are some disadvantages, such as system excessive dependence on the accurate mathematical model, inconvenience of parameter setting, complicated of control algorithm, and poor dynamic.In order to overcome the above drawbacks, this paper proposes a control strategy which combines feedback linearization control and sliding mode control. 
In recent decades, nonlinear control theory has made great progress, especially feedback linearization theory based on differential geometry.In this method, nonlinear system can achieve status or input/output linearization by using a certain nonlinear state transformation or feedback transformation.Feedback linearization control has been applied to three-phase voltage PWM rectifier [14][15][16], which is a multivariable and strong coupling nonlinear system, and achieved well control effect.All these methods can solve 2 Mathematical Problems in Engineering the problem of decoupling for original nonlinear system and obviously improve static and dynamic performance for three phase rectifier.Yet, this control method depends on an accurate mathematical model and is sensitive to system parameters.Sliding mode control is different from feedback linearization control method.Sliding mode control shows great robustness and stays out of parameter changes when the system is running in the sliding surface.In [17][18][19][20], sliding mode control has been applied to the three-phase PWM rectifier and achieved a good result. The three-phase PWM rectifier is a two-level rectifier, while the VIENNA rectifier is a three-level rectifier.Although their structure is not the same, there are some similarity in control strategy.Learning from the applications of feedback linearization control and sliding mode control for threephase PWM rectifier, this paper integrates feedback linearization control method and sliding mode control method and eventually establishes a new type of VIENNA rectifier nonlinear control system.That is, sliding mode control is used in the outer voltage loop and state feedback linearization control is used in the inner current loop.At the same time, space vector pulse width modulation (SVPWM) technology is introduced to modulate the output signal of inner currentloop [21].In order to reduce the chattering phenomenon produced by sliding mode control, index reaching law is adopted to improve the whole approaching process.In order to verify the correctness and superiority of the proposed control strategy, numerical simulation is done. Physical and Modeling Considerations In this section, the physical system and the mathematical model of VIENNA rectifier are presented.The main circuit of VIENNA rectifier and its simplified model are shown in Figures 1(a) and 1(b), respectively.The main circuit includes six fast-recovery diodes (D 1 -D 6 ), three boost inductors, three bidirection power switching tubes ( , , and as shown in the dashed box), and two groups of output capacitances.Among them, , , are the AC input power of VIENNA rectifier; 1 and 2 are DC side output voltage filter capacitor, and their voltage across, respectively, are 1 , 2 ; is load resistance and the voltage across is , and is the output current; is the boosting inductor and is defined as an equivalent resistance of inductor.In order to simplify the system, all the power-switching devices are seen as ideal and switching frequency is much higher than the grid frequency. 
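The model developed in the following subsection is written in the d-q synchronous rotating frame obtained from Park's transformation. As a point of reference, here is a minimal sketch of an amplitude-invariant Park transform; the exact convention (angle reference and scaling factor) used by the authors is not stated in this copy, so the form below is an assumption for illustration only.

import numpy as np

def abc_to_dq(x_a, x_b, x_c, theta):
    """Amplitude-invariant Park transform (assumed convention):
    projects three-phase quantities onto a frame rotating at angle theta."""
    d = (2.0 / 3.0) * (x_a * np.cos(theta)
                       + x_b * np.cos(theta - 2.0 * np.pi / 3.0)
                       + x_c * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -(2.0 / 3.0) * (x_a * np.sin(theta)
                        + x_b * np.sin(theta - 2.0 * np.pi / 3.0)
                        + x_c * np.sin(theta + 2.0 * np.pi / 3.0))
    return d, q

# Example: a balanced three-phase set aligned with the d-axis gives (d, q) = (E, 0).
E, omega, t = 311.0, 2.0 * np.pi * 50.0, 0.01
theta = omega * t
e_a = E * np.cos(theta)
e_b = E * np.cos(theta - 2.0 * np.pi / 3.0)
e_c = E * np.cos(theta + 2.0 * np.pi / 3.0)
print(abc_to_dq(e_a, e_b, e_c, theta))   # approximately (311.0, 0.0)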
The mathematical model of VIENNA rectifier in abc coordinates can be expressed as follows: where With Park's transformation, the mathematical model of VIENNA rectifier in d-q coordinates system is given as follows: where and are the grid voltage variable in d-q coordinate system; and are the grid current variable in d-q coordinate system.As for , , , and , they are the switching function in d-q coordinate system for ( = , , ).The equivalent circuit of VIENNA rectifier in d-q coordinate system is shown in Figure 2. For three-phase balance power grid, because of Thus, ( 3) and ( 4) can be rewriten as follows: With ( 5) plus ( 6), we can obtain Control Goal and Control Strategies The main control goal for VIENNA rectifier is to make sure that input current waveform is sinusoid and track the input voltage waveform, the power factor is unity, and DC side output voltage stabilized at the given reference voltage.Equations ( 8)- (9) show that VIENNA rectifier is a strong decoupling nonlinear system.In order to realize the variables decoupling in d-q axis, ensure that input current is sinusoid and tracks input voltage; the feedback linearization technology is used in the inner current loop.At the same time, sliding mode control based on index reaching law is used in the voltage loop to stabilize the output voltage and provide the reference directive current * for inner current loop. Inner Current Loop Controller Design Based on Feedback Linearization.The role of inner current loop is making and keep track of reference directive current * and * , respectively, and realizes system unity power factor operation. Affine Nonlinear Models of VIENNA Rectifier. Select state variables as Select input variables as Select output variables as According to (8), we get the affine nonlinear equation for two-input and two-output system as follows: where For the system which is described in (13), it is nonlinear to state variable () while linear to controlled variable . Taking Lie derivative for (13), we can obtain where 3.1.3.Coordinate Transformation and the Control Law.Select output vector as According to the definition of relative degree [22], the derivative of two input/output system output functions is where In the meanwhile, since the matrix () is invertible, the state feedback control law is given by In order to track the desired value, a new control law is given by [14] V 1 = − 10 ( * 1 − 1 ) From ( 20), (22), and ( 23), the new state feedback control law of the original nonlinear inner current loop is obtained: So far, the original nonlinear system is linearization.Meanwhile, the control for inner current loop can be achieved by setting feedback coefficients 10 and 20 . Analysis of System Dynamic Stability. Because of the relative degree of (8) being 1 and according to the literature [14], we can get the error dynamic closed-loop system as follows: where = [ ẏ Lemma 2. For internal dynamic formula (27), when tracking error vanishes, that is it is called the zero dynamics [14].And if a nonlinear system's zero dynamics is asymptotically stable, it is said to be minimum phase. Because the power supply is three-phase symmetrical voltage and the rectifier is operated with unity power factor and considered in the steady state, we have = , = * , = * and = 0. Therefore, ( 8) and ( 9) can be simplified as follows: where , , * , and represent the steady-state values of the switching functions and , -axis reference current * , and output load current , respectively. 
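Most of the displayed equations in this subsection have lost their symbols in extraction, so no attempt is made to reconstruct them verbatim. To illustrate the idea described in the text — cancelling the d-q cross-coupling so that each current channel obeys first-order linear error dynamics — the sketch below uses a generic averaged boost-rectifier current model; the model form, parameter values and feedback gains are assumptions, not the paper's.

import numpy as np

# Hypothetical parameters (not taken from the paper).
L, R, omega = 2e-3, 0.1, 2.0 * np.pi * 50.0   # boost inductance, ESR, grid frequency
k1, k2 = 2000.0, 2000.0                        # feedback-linearization gains
dt = 1e-5

def inner_current_loop(i_d, i_q, e_d, e_q, i_d_ref, i_q_ref):
    """One step of an (assumed) input/output feedback-linearized current loop.
    Generic averaged model: L di_d/dt = e_d - R i_d + w L i_q - u_d (and likewise
    for q with the sign of the coupling term flipped). Choosing u_d, u_q as below
    cancels the cross-coupling and imposes first-order tracking error dynamics."""
    v1 = k1 * (i_d_ref - i_d)          # new (linear) inputs
    v2 = k2 * (i_q_ref - i_q)
    u_d = e_d - R * i_d + omega * L * i_q - L * v1
    u_q = e_q - R * i_q - omega * L * i_d - L * v2
    return u_d, u_q

def plant_step(i_d, i_q, e_d, e_q, u_d, u_q):
    """Euler step of the same averaged model, used here only as a consistency check."""
    di_d = (e_d - R * i_d + omega * L * i_q - u_d) / L
    di_q = (e_q - R * i_q - omega * L * i_d - u_q) / L
    return i_d + dt * di_d, i_q + dt * di_q

i_d, i_q = 0.0, 0.0
for _ in range(2000):
    u_d, u_q = inner_current_loop(i_d, i_q, 311.0, 0.0, 10.0, 0.0)
    i_d, i_q = plant_step(i_d, i_q, 311.0, 0.0, u_d, u_q)
print(i_d, i_q)   # i_d tends to 10, i_q stays at 0 (unity power factor)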
By contrast, we find that (29) are the same as ( 21)-( 23) in [14], so the dynamic analysis of VIENNA rectifier can learn from the literature [14]. Transform the original nonlinear system (13) into the linear form of ( 25)-( 27) with = 3 = .The internal dynamics (27) becomes According to the energy balance and because of the tracking error vector approaching zero, that is, → * and → 0, we can get the following zero dynamics equation: From (31), we can know the following.For a positive initial side output voltage, the steady-state value of 3 will eventually equal the desired value of * .So, when using cascaded current mode, the control system is a minimum phase system, and the dynamic is stability.Therefore, we can design a superior performance control system. Voltage Loop Controller Design Based on Sliding Mode Control.The role of voltage loop controller is to guarantee output voltage tracking the given reference voltage * and stable.In the meantime, voltage loop controller provides -axis reference current * for the current loop.For (24), and can be acquired by taking Park's transformation to input currents , , and .For dual closed loop control system which adopts inner current loop and outer voltage loop, * is always provided by the outer loop.For three-level PWM rectifier, PI algorithm [23], fractional control algorithm [24], and sliding mode control algorithm [25] are used in the outer voltage loop.Since inner current loop uses feedback linearization control, it brings a shortcoming for this control system; that is, the control system is overreliance on accurate mathematical model.In order to compensate this shortcoming, sliding mode control algorithm is used in the outer voltage loop.One of the biggest advantages for sliding mode control is insensitive to the change of system parameters and having less demand for control system model.In order to avoid chattering, improve the approaching performance, index reaching law is introduced based on sliding mode control which is described in [25]. Let us assume the error between dc output voltage and the given reference voltage as follows: Based on the principle of sliding mode control [26,27], the sliding surfaces can be defined as When = 0, the system runs in sliding mode surface.Take a derivate on both sides of (33), and because * is constant, yield According to ( 9), (34) can be rewritten as In order to guarantee the system having a good quality in the transition process, especially improving the quality of arrival stage, and eliminating the chattering phenomenon, index reaching law is applied [26]: According to the literature [26], yield From (37), we can obtain From (38), it shows that reducing and increasing can accelerate the approaching process. From ( 35) and (36), we can obtain Further, (39) can be transformed as follows: According to the above assumption, power grid is symmetrical three-phase voltage.When the system is in steady state, yield = 0, / = 0, = √ 3 RMS , = 0, = * .At the same time, the processing speed of inner loop is faster than the outer loop.Thus, can be seen as constant; yield / = 0. From (8), the following approximation algorithm can be obtained: From ( 40) and (41), we can obtain When the system is in steady state, = * .So, (42) is rewritten as As a result, the output of outer voltage loop just is the current instructions * for the inner loop, which is relevant to the output voltage, output current, valid value of phase voltage, -axis current , and so on.Besides, * is irrelevant to the switching function variables. 
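The outer-loop expressions are likewise garbled in this copy, so the following sketch only illustrates the structure described in the text: a sliding surface built from the dc-voltage error, an index (exponential) reaching law ds/dt = -ε·sgn(s) - q·s, and a d-axis current reference obtained from a power-balance argument. The surface choice, the power-balance mapping and all numbers are assumptions rather than the paper's exact expressions.

import numpy as np

# Hypothetical parameters (illustration only).
eps, q_gain = 50.0, 500.0      # index (exponential) reaching-law gains
C = 470e-6                     # total dc-link capacitance (assumed)

def voltage_loop_idref(v_dc, v_dc_ref, i_load, e_rms):
    """Sketch of an outer sliding-mode voltage loop.
    Sliding surface taken simply as the voltage error s = v_dc_ref - v_dc; the
    index reaching law ds/dt = -eps*sgn(s) - q*s shapes the approach phase.
    The d-axis current reference then follows from an assumed power balance:
    (3/2) * E * i_d  ~  v_dc * i_load + C * v_dc * dv_dc/dt."""
    s = v_dc_ref - v_dc
    dv_dc_desired = eps * np.sign(s) + q_gain * s      # from the reaching law
    p_out = v_dc * i_load
    p_cap = C * v_dc * dv_dc_desired
    E = np.sqrt(2.0) * e_rms                            # phase-voltage amplitude
    i_d_ref = 2.0 * (p_out + p_cap) / (3.0 * E)
    return i_d_ref

print(voltage_loop_idref(v_dc=390.0, v_dc_ref=400.0, i_load=4.0, e_rms=220.0))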
Control System Block Diagram.According to the above analysis, we can get the control system block diagram as shown in Figure 3. Simulation Results In order to verify the correctness and superiority of the proposed control method, the simulation model is built in the simulation platform of Matlab/Simulink.The main simulation parameters are given in Table 1.SVPWM algorithm based on two-level space vector is used in modulation methods of VIENNA rectifier.Small positive vector is set as the first vector, by judging the direction of phase load current which is connected to the output neutral point and adjusting relative action time according to the imbalance between small positive and negative vectors.Voltage regulating factor (0 < < 1) is introduced.The adjustment to vector action time is done so as to realize the midpoint potential balance control. System Startup Responses.When the system is started, DC side output voltage rapidly rises from 0 V and stabilizes at reference voltage * approximately at 0.0035 s, the response waveforms of output voltage are shown in Figure 4.At the same time, it shows that the system has a rapid response, without overshoot and static error.Also, it verifies that using sliding mode control method could force the system running path moving fast to sliding surfaces, thereby accelerating the convergence process of the system. Transient Responses to Step Changes in Load.When load suddenly changes, the simulation waveforms are shown in Figure 6.The value of load resistance is changed from 100 Ω to 200 Ω at 0.05 s.As shown in Figure 6(a), DC side output voltage rises to about 6 V instantaneously and then reverts to a stable value (400 V) after 0.001 s. Figure 6(b) shows that input current also suddenly changes.However, input current waveform can correctly track the input voltage waveform and maintain sinusoid.Figure 6(c) shows that active current can correctly track the given active current reference * .At the same time, the system response time is short.In a word, it shows significant anti-interference capability to external disturbance by using the proposed strategy. Transient Responses to Step Changes in the Given Output Reference Voltage.Assuming output voltage value instantaneously declines to 350 V at 0.06 s, the simulation waveforms are shown in Figure 7. From Figure 7(a), output voltage begins to decrease at 0.06 s and stabilized at 350 V after 0.003 s.From Figure 7(b), -phase input current value is 0 at 0.06 s and lasts about 0.003 s.Since then, it starts to increase rapidly and tracks input voltage and remains sinusoid. Conclusions In this paper, a dual closed loop control method, that is, outer voltage loop based on sliding mode control and inner current loop based on feedback linearization, is proposed.Simulation results show that the proposed control strategy has a good control effect.The main contributions of this paper are as follows. (i) It presents feedback linearization control strategy for VIENNA rectifier inner current loop such that it solves the linearization problems and realizes - axis variable decoupling. (ii) It presents sliding mode control strategy for VIENNA rectifier outer voltage loop and introduces the index reaching law such that it solves stability of output voltage, system startup response, and dynamic characteristics. 
(iii) By applying feedback linearization only in the inner current loop and sliding mode control in the outer voltage loop, the combined strategy overcomes the drawback of over-reliance on an accurate mathematical model. Meanwhile, the proposed control strategy greatly improves the robustness of the system.
Figure 1: Main circuit of VIENNA rectifier and its simplified model.
4.2. System Steady-State Characteristics. When the VIENNA rectifier operates in steady state, the input current waveform closely tracks the input voltage waveform and remains sinusoidal, as shown in Figure 5(a); the active current tracks the reference current given by the outer voltage loop, as shown in Figure 5(b); and the reactive current is zero, meaning the rectifier operates at unity power factor, as shown in Figure 5(c). The simulation waveforms indicate that the VIENNA rectifier reaches the predetermined target.
Figure 4: Response waveforms of output voltage when the system starts up.
Figure 6: Transient responses to step changes in load.
Figure 7: Transient responses to step changes in the given output reference voltage.
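To connect the neutral-point balancing described in the simulation section to an implementation, the sketch below redistributes the dwell time of a redundant small-vector pair using a regulating factor k (0 < k < 1). The selection logic (which phase current is sampled and the sign rule) is an assumption for illustration; the paper's exact rule is not recoverable from this copy.

def split_small_vector_time(t_small, dv, i_ph, k=0.2):
    """Redistribute the dwell time of a redundant positive/negative small-vector
    pair to rebalance the dc-link midpoint. dv = v_c1 - v_c2 is the capacitor
    voltage imbalance and i_ph is the phase current drawn from the neutral point
    while the small vector is applied. The regulating factor k scales how
    aggressively the two dwell times are skewed (illustrative logic only)."""
    # If applying the positive small vector would worsen the imbalance, shift
    # dwell time towards its redundant counterpart, and vice versa.
    direction = 1.0 if dv * i_ph > 0 else -1.0
    t_pos = 0.5 * t_small * (1.0 - direction * k)
    t_neg = 0.5 * t_small * (1.0 + direction * k)
    return t_pos, t_neg

print(split_small_vector_time(t_small=20e-6, dv=+3.0, i_ph=+5.0))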
3,865.4
2015-03-23T00:00:00.000
[ "Engineering", "Physics" ]
Local random potentials of high differentiability to model the Landscape We generate random functions locally via a novel generalization of Dyson Brownian motion, such that the functions are in a desired differentiability class, while ensuring that the Hessian is a member of the Gaussian orthogonal ensemble (other ensembles might be chosen if desired). Potentials in such higher differentiability classes are required/desirable to model string theoretical landscapes, for instance to compute cosmological perturbations (e.g., smooth first and second derivatives for the power-spectrum) or to search for minima (e.g., suitable de Sitter vacua for our universe). Since potentials are created locally, numerical studies become feasible even if the dimension of field space is large (D ~ 100). In addition to the theoretical prescription, we provide some numerical examples to highlight properties of such potentials; concrete cosmological applications will be discussed in companion publications. Introduction Random functions have many application in physics and mathematics, one of the best known ones is their use to describe disordered systems in solid state physics leading to Anderson localization (often Gaussian random potentials based on a truncated Fourier series are used). In this paper we derive, to our knowledge, new methods to generate random functions of high differentiability locally, while retaining a Hessian in the Gaussian orthogonal ensemble. Our motivation stems from our desire to study cosmological implications of certain landscapes in string theory, but we tried to make our results accessible to a wider audience by delegating cosmological applications to a separate publication. Readers not familiar with cosmology or string theory may simply skip the motivational paragraphs. As alluded to, a recent application that motivated this work is the use of random functions to model certain landscapes in string theory [1]. For example, the Denev-Douglas landscape [2] was modelled by a random potential in [3,4] ("Random Supergravities"), see also [5]. Since a top down approach yielding the full potential is virtually impossible in all but the simplest cases, random potentials are used to conduct numerical experiments and search for suitable vacua [5,6], investigate the feasibility of inflation [7][8][9][10] or compute (distributions of) observables 1 [21][22][23][24][25][26][27], see also [28][29][30][31][32] for related work (for recent reviews of inflation see [33,34] and for model building in string theory see [35]). Naturally, the closer random potentials model the actual landscape of interest, the more reliable predictions become. Thus, our interest is to prescribe certain generic properties, such as the overall hilliness, the properties of the Hessian at well separated points etc., whenever a random potential is generated. For example, in random supergravities, the Hessian is a mix of Wishert and Wigner matrices [3]. A complementary analytic tool is random matrix theory (see [36,37] for a textbook introduction) which can often be used in conjunction with numerical experiments due to the feature of universality [38][39][40][41][42]. Most work in recent years relied on potentials constructed globally via truncated Fourier series [7, 8, 25-27, 43, 44], a subclass of which are Gaussian random potentials. However, this approach has the disadvantage of being computationally intensive as the dimensionality D of field space increases, since the number of random parameters increases as # D . 
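For context on why the global construction becomes unwieldy, here is a sketch of a Gaussian random potential built from a truncated Fourier series, as mentioned above: the number of random coefficients grows like (2 n_max + 1)^D with the field-space dimension D. The spectrum and normalization used here are purely illustrative assumptions.

import numpy as np

def fourier_random_potential(D, n_max, seed=0):
    """Gaussian random potential from a truncated Fourier series (sketch).
    Returns a callable V(phi) together with the number of random coefficients,
    which grows like (2*n_max + 1)**D and makes this global construction
    impractical for large D."""
    rng = np.random.default_rng(seed)
    # All integer wave vectors with components in [-n_max, n_max].
    grids = np.meshgrid(*([np.arange(-n_max, n_max + 1)] * D), indexing="ij")
    k = np.stack([g.ravel() for g in grids], axis=1)          # (#modes, D)
    a = rng.normal(size=len(k))                                # cosine amplitudes
    b = rng.normal(size=len(k))                                # sine amplitudes

    def V(phi):
        phase = k @ np.asarray(phi)
        return float(np.sum(a * np.cos(phase) + b * np.sin(phase))) / np.sqrt(len(k))

    return V, len(k)

V, n_coeff = fourier_random_potential(D=3, n_max=2)
print(n_coeff, V(np.zeros(3)))   # already 125 coefficients for D = 3, n_max = 2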
Thus, for a description of the hundreds of fields on generic string theoretical landscapes [1], this approach becomes useless. Fortunately, many questions of interest, for instance the likelihood of inflation, the probability of encountering a minimum or the values of observables such as the scalar spectral index (the slope of the observed power-spectrum), only require knowledge of the scalar fields' potential (the landscape) in the vicinity of the trajectory taken by the fields. This motivated Marsch et al. [45] to construct the potential locally by employing Dyson Brownian motion [46] (DBM), greatly reducing the cost of numerical experiments (∝ D # ) to the point where hundreds of fields can be treated on a notebook with Mathematica. To generate a potential via DBM, one stitches together patches wherein the potential is given by a Taylor series, truncated at second order. After a prescribed step, for instance a set distance along an inflationary trajectory, random components are added to the Hessian. In DBM, these random components conform to a Gaussian distribution with prescribed mean and variance, so that the Hessian is a member of the Gaussian orthogonal ensemble. While efficient, this procedure has a serious drawback: after each step the eigenvalues of the Hessian, i.e. the masses of fields, jump. Such potentials are ill suited to study cosmological perturbations, since artefacts arise in correlation functions. For example, even a single jump in the mass of one of the fields causes a prominent ringing pattern in the power-spectrum [47][48][49] (i.e. the two point correlation function). Higher order correlation functions, commonly lumped together under the name non-Gaussianities, are affected even more. Such jumps in the Hessian can also hinder the search for minima: whenever a minimum is approached, the Hessian starts to dominate the evolution; due to the random jumps, the trajectory is bounced around preventing a smooth approach to the minimum. To a lesser degree the steps can also reduce the probability of finding regions that are sufficiently flat for inflation. It is possible to reduce these artefacts by decreasing the step size; however, this brute force approach reduces the computational advantage of DBM. Thus, in this paper, we generalize the method by Dyson to generate potentials in any desired differentiability class: we delegate perturbations not to the Hessian, but to higher derivative tensors, while retaining a Hessian in the Gaussian orthogonal ensemble (other distributions may be chosen if desired); for example, if V ∈ C 2 is desired, random Gaussian perturbations are added to the tensor of third derivatives ∂ 3 V /(∂φ a ∂φ b ∂φ c ). After a brief review of the method to generate potentials V ∈ C 1 via DBM in Sec. 2, we provide two distinct methods to generate V ∈ C 2 , both of which yield the same statistical properties of the Hessian. Additional freedom is present, since the number of random variables exceed the number of conditions stemming from the prescribed statistical properties of the Hessian. The first method provides potentials that are "smoothest" in the sense that a maximal number of random variables is set to zero, Sec. 3.3.1. The second methods adds perturbations primarily in the directions set by the eigenvalues of the Hessian, Sec. 3.4.1. We compare both methods in Sec. 5 and find them to be qualitatively indistinguishable and free of artefacts. 
We plan to use these potentials for cosmological applications in a forthcoming publication, where we also intend to test how sensitive observables are to the chosen method -naturally, any dependence would dramatically reduce the predictiveness and thus reduce the usefulness of such random potentials to model concrete landscapes in string theory. However, if observables are independent of the methods, we have the opportunity to compute generic predictions for wholes classes of string theoretical landscapes, as opposed to investigating inflationary models on a case by case basis. While potentials V ∈ C 2 are sufficient to compute the power-spectrum, they should not be used for higher order correlations functions. For example, if one wants to compute the bi-spectrum, one needs to be able take three derivatives of the potential, i.e. V ∈ C 3 is needed. We therefore provide a generalization of the first method to create potentials in any desired differentiability class (V ∈ C k with k ∈ N) in Sec. 4. For k = 3, we find that spurious oscillations arise in the evolution of the Hessian's eigenvalues, Sec. 5. These oscillations are caused by truncating the Taylor series at higher order and can't be avoided within the current framework; nevertheless, their amplitude, and thus their effect on observables, can be reduced to any desired level by decreasing the step length. While not an ideal solution, such potentials still improve on potentials of lower differentiability (one can't compute the bi-spectrum at all if V ∈ C 1 ). We leave future applications of such potentials as well as improvements to the methods put forth in this article to future work. We would like to reiterate that the methods introduced in this study are independent of the applications outlined above. 2 Creating random potentials along a trajectory Motivation and goals Inflationary observables depend only on properties of the potential in the vicinity of the trajectory, which motivated Marsh et.al. [45] to develop a computationally economical approach to generate random potentials locally by defining random functions around a path Γ in field space 2 : for any Γ, given the value of the potential V , gradient V and Hessian V ≡ H at a point p 0 , the values of the potential and the gradient vector at a nearby point p 1 can be obtained to leading order by means of a Taylor expansion. To construct a random potential, the Taylor expansion is truncated and the Hessian matrix at p 1 is altered by adding a random matrix δH to the Hessian at p 0 , H(p 1 ) = H(p 0 ) + δH . (2.1) By repeating this process along the path Γ, a continuously differentiable, random potential, i.e. V ∈ C 1 , can be obtained. The distribution of the Hessian matrix at well-separated points (i.e. separated by several units of a characteristic correlation length Λ h ) can be restricted to any desired distribution; if Wigner's Gaussian Orthogonal Ensemble (GOE) is chosen, as in [45], the elements of the Hessian undergo Dyson Brownian motion (DBM) [46]. As a consequence of statistical rotational invariance, Hessian matrices associated with well-separated points constitute a random sample of the statistical ensemble, which is invariant under orthogonal transformations. Further, if the field space is Ddimensional, the D(D + 1)/2 entries of the Hessian matrix H are statistically independent 3 . While the choice of the GOE is the simplest one, it is by no means unique; in concrete applications, e.g. 
to construct potentials obeying prescribed properties of a landscape, the rules of constructing the potential have to be adjusted accordingly. We review Dyson Browning motion in more detail in the next section, before generalizing the prescription. We are particularly interested in two aspects: 1. Generate potentials V ∈ C k with k ≥ 2 (Sec. 3 and Sec. 4), which is needed to compute correlations functions (e.g., k = 2 for the power-spectrum, k = 3 for the bi-spectrum etc.) if artefacts are to be avoided. In addition, V ∈ C 2 is desirable for searches of extrema (if a critical point is approached, the jumps in the Hessian in ordinary DBM hinders a localisation/identification of extrema). 2. Incorporate a soft upper and lower bound on the values of the potential, as in [8]; such a bound is necessary if the potential is used to model a low energy, effective potential. In this article, we focus on the first point, the generation of random functions V ∈ C k , which provides the foundation for concrete applications, such as the computation of cosmological perturbations, or further refinements, such as the incorporation of bounds mentioned in point 2. The latter topics are the subject of companion publications (in preparation). Review: Dyson Brownian motion potentials Dyson Brownian Motion is a canonical -but not unique -choice of rules to govern the stochastic evolution of the Hessian matrix that gives rise to independent GOE Wigner matrices at wellseparated points. To this end, the Hessian needs to be perturbed according to (see [45,46], whose results we summarize here) where δA ab are D(D + 1)/2 zero-mean stochastic variables and the term ∝ −H ab is the uniquely determined restoring force ensuring that the distribution of the entries of the Hessian remains finite and obeys the GOE. This restoring force does not imply the boundedness of the potential. The variable s represents the field space path length along the trajectory Γ and δs is the length of an individual step along Γ. Λ h can be interpreted as a horizontal correlation length. To achieve Dyson Brownian motion of H ab , the first two moments of δH ab need to satisfy [45] δH ab where σ represents the standard deviation of the corresponding Wigner ensemble. To implement the above prescription, consider a D-dimensional field space with fields φ a , a = 1 . . . D and a potential V . We would like to model V as a random one given a suitable starting position. The potential in the vicinity of the starting point p 0 can be expanded up to second order as where Λ v (mass dimension one) sets the vertical scale, andφ a ≡ φ a /Λ h are rescaled, dimensionless fields. The normalization factor √ D is introduced to simplify subsequent expressions. We use Einstein's summation convention over field indices and consider a flat field space metric if not stated otherwise. If we take the truncated Taylor expansion in (2.5) at p 0 , the potential at an adjacent point p 1 close to p 0 , i.e. with local coordinates δφ a satisfying δφ a 1 where . . . is the Cartesian norm, can be written as To generate a random potential, one can truncate the series expansion at second order and set where δv ab | p 0 are taken to be elements of a random matrix 4 . Repeated application over successive points p n along Γ results in a random, piecewise patched together potential V ∈ C 1 . Dyson Brownian motion random potentials are therefore defined by imposing for the mean and second moment of the added random components δv ab . 
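A minimal numerical sketch of the Dyson Brownian motion update just reviewed: the mean of the perturbation supplies the restoring force -H_ab δs/Λ_h and the variance scales as σ² δs/Λ_h, with the diagonal variance doubled. Since the displayed moments are garbled in this copy, the (1 + δ_ab) factor on the variance is stated here as the usual GOE convention, i.e. an assumption.

import numpy as np

def dbm_step(H, delta_s, Lambda_h, sigma, rng):
    """One Dyson-Brownian-motion update of the Hessian (sketch).
    Mean:     <dH_ab> = -H_ab * delta_s / Lambda_h            (restoring force)
    Variance: Var(dH_ab) = sigma**2 * (1 + delta_ab) * delta_s / Lambda_h."""
    D = H.shape[0]
    scale = sigma * np.sqrt(delta_s / Lambda_h)
    dA = np.zeros((D, D))
    iu = np.triu_indices(D, k=1)
    dA[iu] = rng.normal(scale=scale, size=len(iu[0]))          # off-diagonal entries
    dA = dA + dA.T
    dA[np.diag_indices(D)] = rng.normal(scale=scale * np.sqrt(2.0), size=D)
    return H + (-H * delta_s / Lambda_h + dA)

# Quick check of eigenvalue relaxation towards the Wigner semicircle (edges of
# roughly +/-2 for sigma**2 = 2/D with the normalization used in the text).
D, Lambda_h, delta_s = 5, 0.1, 0.1 / 100
sigma = np.sqrt(2.0 / D)
rng = np.random.default_rng(1)
H = np.zeros((D, D))                      # start from a maximally atypical Hessian
for step in range(300):                   # about three correlation lengths
    H = dbm_step(H, delta_s, Lambda_h, sigma, rng)
print(np.linalg.eigvalsh(H))              # eigenvalues spread out over roughly [-2, 2]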
Since O(v ab ) = 1/ √ D, the magnitude of a typical eigenvalue of v ab is of order one. Thus, v 0 and v a both receive contributions of order 1/ √ D over a correlation length. Hence, for potentials uncorrelated over distances which in turn explains the overall normalization factor of √ D in (2.5). Eigenvalue relaxation The probability distribution of a matrix' eigenvalues in the unfluctuated GOE (i.e. the stationary distribution) is given by Wiegners semi-circle law. If a Matrix is initialized in a fluctuated state, for example such that the smallest negative eigenvalue is close to zero, its eigenvalues relax to the stationary distribution as Dyson Brownian motion proceeds. The relevant correlation length is given by Λ h . As a consequence, it is unlikely that a shallow patch remains flat for s = ∆φ Λ h 5 . To be concrete, one can show [45] that the expectation value of the smallest eigenvalue λ min of a matrix H undergoing DBM satisfies where H 0 is the initial matrix at s = 0 and q ≡ exp(−s/Λ h ). Here, a normalization such that eigenvalues lie between −2 and 2 was chosen. Due to the exponential suppression in q, the initial eigenvalue λ min [H 0 ] is forgotten after a few Λ h are traversed. We demonstrate this eigenvalue relaxation and recover the results of Marsh et al. [45] in Fig. 1, where we plot a random potential V ∈ C 1 created via Dyson Brownian motion (using(2.5) and perturbations of the form (2.10) and (2.11)) next to the corresponding eigenvalues of the Hessian. Evidently, eigenvalues relax to the stationary distribution after about Λ h /δs ∼ 100 iterations or steps. Here and in all subsequent figures we take 1M P = 0.1, δs = Λ h /100 and σ 2 = 2/D if not stated otherwise. While the choices of Λ h and Λ v have implications for concrete applications 6 , they merely correspond to a rescaling here. Thus, without loss of generality, we can keep them fixed. Further, as long as the step length is small enough (δs Λ h has to hold) it should not have any impact on applications; how small it has to be depends on the type of application. We come back to this point in Sec. 5. In the meantime, δs = Λ h /100 is small enough for our purposes, while big enough to enable fast computations. σ controls the strength of the perturbations and thus affects the overall hilliness of V . We chose σ 2 = 2/D in line with [45], to enable direct comparisons. To test our code, we recovered numerically the results of [45], which we omit here for reasons of brevity. The path Γ we choose is given by following the slope of the potential. In a cosmological setting, one is commonly interested in solving the field equations in conjunction with the Friedmann equations, but following the steepest descent is faster and suffices for our purposes 7 . While the observation of eigenvalue relaxation does not serve as a strong test for the ensemble to be the GOE (the same results hold true for other symmetric distributions with zero mean and finite variance for off diagonal elements), it provides a simple, necessary consistency check. In the following, we use such plots as benchmarks: as we construct potentials in a higher differentiability class, which is achieved by perturbing a higher order derivative tensor instead of the Hessian while retaining the statistical properties of the Hessian, plots such as the ones above should remain qualitatively unchanged. 3 Extending Dyson Brownian motion to generate random potentials V ∈ C 2 The perturbations of the Hessian in the aforementioned procedure yield a potential in C 1 . 
If one wishes to study inflationary cosmology on random potentials, one is commonly interested in the evolution of cosmological perturbations to compute correlation functions of the gauge invariant curvature fluctuation ζ. The latter can be measured by observations of the cosmic microwave background radiation (CMBR) or large scale structure surveys. Of particular interest are the twopoint function (power-spectrum) and the three-point function (a measure of non-Gaussianities). Since ζ is related to fluctuations in the inflatons at horizon crossing, see [33,34] for reviews, the correlation functions of ζ probe the properties of the inflationary potential around sixty e-folds before the end of inflation. The n'th correlation function is sensitive to the n'th derivative of the potential -for instance, at the level of the slow roll approximation the second derivative of the potential enters the scalar spectral index. Thus, a discontinuity in the 2'nd derivative of the potential, as induced by Dyson Brownian motion, leads to artefacts already at the level of the power-spectrum: as shown in [47][48][49] such jumps lead to extended oscillations in the power-spectrum; higher order functions are affected as well. Keeping δs 1 and thus δH ab sufficiently small, washes out these artefacts, since they are superimposed on top of each other as in [64]. However, such a brute force approach is computationally intensive, offsetting the main advantage of DMB, and potentially not entirely free of artefacts. Thus, it is desirable to have access to a random potential in the class C k if the k'th correlation function is to be computed. Even if one is not interested in cosmological perturbations but merely properties of the inflationary background trajectory, it is advisable to have k ≥ 2: for example, one might be interested in the final vacuum reached after inflation, as in [8]. To this end one needs to identify the presence of a local minimum. However, if the necessarily shallow region near the minimum is approached the gradient approaches zero; as a consequence, the background dynamics become dominated by the Hessian, not the gradient. We expect the sudden steps in H to hinder the proper identification of a minimum, since the artificial jumps prevent a smooth approach to it. This expectation can be seen numerically, and we plan to elaborate on this point in future work. To a lesser degree we expect this effect to arise during slow roll inflation as well, particularly if inflation is driven near a saddle point or maximum. In this section we generate a random potential V ∈ C 2 , such that the elements in the Hessian still obey the GOE, by adding the random fluctuations to the tensor of third derivatives instead of the Hessian. Such a potential is sufficient to discuss background dynamics, including questions pertaining to the final resting place, as well as the power-spectrum. A generalization to V ∈ C k is given in Sec. 4. A potential to third order Let us start by Taylor expanding V at p 0 to third order, To create a random landscape, we wish to truncate the series at third order and add a perturbation to via an appropriately chosen δv abc . Since the third order derivative tensor, and thus δv abc , has to be symmetric under permutations of abc, the Hessian automatically inherits the proper symmetries. As before, going along a path Γ from point to point, we can patch together a random potential; however, this time, the potential, the gradient and the Hessian remain continuous. 
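The bookkeeping of the third-order patches can be made concrete with a short sketch: shifting the truncated Taylor data from p_n to p_{n+1} updates the value, gradient and Hessian deterministically, which is why they remain continuous; randomness only enters through a subsequent symmetric perturbation of the third-derivative tensor. The overall normalization factors (Λ_v, √D) from the text are omitted here for clarity.

import numpy as np

def shift_patch(v0, g, H, T, dphi):
    """Move the truncated third-order Taylor data from p_n to p_{n+1} = p_n + dphi.
    v0: value, g: gradient, H: Hessian, T: symmetric third-derivative tensor
    (all in rescaled, dimensionless field coordinates)."""
    v0_new = (v0 + g @ dphi + 0.5 * dphi @ H @ dphi
              + np.einsum('abc,a,b,c->', T, dphi, dphi, dphi) / 6.0)
    g_new = g + H @ dphi + 0.5 * np.einsum('abc,b,c->a', T, dphi, dphi)
    H_new = H + np.einsum('abc,c->ab', T, dphi)
    return v0_new, g_new, H_new

# The new value, gradient and Hessian follow deterministically from the old patch;
# a symmetric perturbation dT added to T afterwards is the only stochastic input.
D = 4
rng = np.random.default_rng(0)
H = rng.normal(size=(D, D)); H = (H + H.T) / 2.0
T = rng.normal(size=(D, D, D))
T = (T + T.transpose(0, 2, 1) + T.transpose(1, 0, 2)
     + T.transpose(1, 2, 0) + T.transpose(2, 0, 1) + T.transpose(2, 1, 0)) / 6.0
v0, g, dphi = 0.0, rng.normal(size=D), 1e-3 * rng.normal(size=D)
print(shift_patch(v0, g, H, T, dphi)[2][0, :3])   # smoothly updated Hessian entries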
If one wishes to create a potential in C k , one needs to go up to the k + 1'th order in the Taylor expansion, as done in Sec. 4. Imposing properties of the Hessian As explained in Sec. 2.2, the perturbation of the Hessian needs to obey (2.10) and (2.11) in order to create a Dyson Brownian motion random potential. Thus, we wish to consider δv abc such that the mean and the variance of the Hessian remain the same as if perturbations were added directly to the Hessian. Consider δv abc to be Gaussian random variables. Since the sum of Gaussian random variables yields a new Gaussian random variable with mean and variances equal to the sum of the respective quantities of the summed variables, it is possible to generate a perturbation of the Hessian δv ab with the desired properties leading to DBM. To be concrete, noting that 8) for N (a, m 2 ) a normal distributed random variable with mean a standard deviation m while k is a scalar, we can work out the needed mean and variances for δv abc such that (2.10) and (2.11) hold, yielding where Var(x) =< x 2 > − < x > 2 . Translating the above expressions into explicit conditions for the mean and variance of δv abc is complicated by the sum over c. To alleviate this hurdle and simplify the procedure, we perform a rotation in field space. Let us consider two distinct rotations/prescriptions to identify conditions for the means and variances of δv abc : 1. We rotate the field space such that one basis vector aligns with the step δφ so that the sum collapses to a single entry. After dividing by the step-length, we can identify the desired conditions on the means and variances. This prescription is easily generalized to generate V ∈ C k . 2. We rotate the field space such that the Hessian is diagonalized. As a consequence, off-diagonal means are zero while variances can be simplified. Once the conditions are imposed, we need to rotate back to the original coordinate system to perform the next step. While statistical properties of the Hessian are identical in both procedures, the actual conditions on δv abc differ. This is expected, since the number of independent statistical variables exceeds the number of conditions in (2.10) and (2.11) for k ≥ 2. Since the differences are delegated to a tensor not directly entering observables, we expect all prescriptions to yield consistent predictions, e.g. for the power-spectrum. Rotating field space To align a basis vector with δφ We wish to rotate a basis vector e i in field space in the direction of the vector δφ. For e i we choose the direction in which δφ has its maximal component, i.e. |δφ i | > |δφ c | for c = i (if the maximal component is degenerate, we choose the one with the lowest index). To perform this rotation, we identify two orthonormal vectors in the plane spanned by e i and δφ, which we extend to an orthonormal basis of R D via the Gram-Schmit procedure (this step is not unique). With respect to this basis, we consider the rotation by the required angle θ in the plane spanned by the two vectors of interest and use the identity on the space generated by the rest of the orthonormal basis. Since by definition the basis vector e i is already of unit norm, we begin by normalizing the vector δφ, u ≡ δφ |δφ| . (3.11) To create another orthonormal vector in the plane of interest, we define v ≡ e i − (u · e i )u |e i − (u · e i )u| , (3.12) in line with the Gram-Schmidt method. 
The projection operator onto the plane spanned by e i and δφ is and Q = I − uu T − vv T (3.14) projects onto the D − 2 dimensional perpendicular subspace of R D . Since the rotation takes place in the target space of P , we can write the rotation matrix as where [u v] is the D × 2 matrix with u and v written as column vectors. R θ is the normal rotation matrix in two dimensions, i.e. R is the desired rotation matrix aligning e i with δφ while R −1 is used to rotate back to the original coordinate system. Denoting quantities in the rotated coordinate system by an underscore, we have for example To avoid cluttering our notation, we suppress the underscore in the following whenever it is clear in which coordinate system computations are performed. Imposing constraints on the mean and variance of δv abc To impose (3.9) and (3.10), we go to the rotated coordinate system introduced in Sec. 3.3. All expressions and tensor components in this section are given in this coordinate system (denoted by an underscore in Sec. 3.3) if not stated otherwise. We first recall that the rescaled step length is given by where we keep δs = constant from step to step 8 . As δφ and e i are aligned (the index i is not a free index in this section), we have As a consequence, (3.9) becomes Noting that v abi | p n−1 = v abi | p n−2 + δv abi | p n−2 we arrive at the D(D − 1)/2 independent conditions for the mean Since v abc is completely symmetric, the following components are determined by symmetry < δv ibc | p n−2 > = < δv cbi | p n−2 > , (3.23) < δv aic | p n−2 > = < δv aci | p n−2 > . (3.24) Since a completely symmetric tensor of rank k and indices ranging from 1 to D has independent components, there are N (3, D) − N (2, D) = D(D − 1) 2 /6 hitherto unspecified entries in δv abc | p n−2 . These leave the conditions (3.9), and thus the Hessian, unaltered, but they affect the overall smoothness of the potential. Thus, imposing the GOE for the Hessian does not uniquely determine the distribution of δv abc | p n−1 or the class of random potentials. The only constraint on these unspecified entries is that they should lead to a symmetric third order tensor. The simplest choice is to leave these components unchanged in the rotated coordinate system. As a consequence, the only variation of these entries in the original frame stems from rotating to-and-fro. This choice leads to the "smoothest" potential satisfying (3.9). Following a similar line of reasoning, we can use (3.10) to set the variance of δv abc | p n−2 . Using we arrive at where used (3.22). Again, symmetries determine the values of variances related by permutations of a, b, i, while D(D − 1) 2 /6 entries are up to our choice. We again choose to leave these entries unchanged in the rotated coordinate system. Conditions (3.22) and (3.27) set the mean and variance of δv abc in the rotated coordinate system S and they are deceptively simple, but it should be noted that these equations are not tonsorial. Thus, they only hold in S. However, since δv abc is a tensor, it can be rotated back to the original coordinate system S via (3.18). In S the simplicity of the imposed conditions is hard to spot; indeed, working entirely in S, it is hard to distinguish between conditions dictated by (3.9) as well as (3.10) and free choices made on our part. To summarize, we arrive at totally symmetric, Gaussian perturbations δv abc such that the Hessian obeys (2.10) and (2.11), which in turn define a generalized Dyson Brownian motion random potential V ∈ C 2 as opposed to V ∈ C 1 . 
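The rotation used in this prescription can be assembled directly from the quantities defined above (u, v, the projector onto their plane, and the two-dimensional rotation R_θ). A sketch follows, with crude handling of the degenerate case in which δφ already points along a basis vector.

import numpy as np

def rotation_aligning_step(dphi):
    """Build the rotation R of Sec. 3.3: in the rotated frame the step dphi points
    along the basis vector e_i with the largest |dphi_i| (ties and the degenerate
    case dphi parallel to e_i are handled crudely here)."""
    D = len(dphi)
    u = dphi / np.linalg.norm(dphi)
    i = int(np.argmax(np.abs(dphi)))
    e_i = np.zeros(D); e_i[i] = 1.0
    w = e_i - (u @ e_i) * u                      # Gram-Schmidt step
    if np.linalg.norm(w) < 1e-12:                # dphi already along e_i
        return np.eye(D)
    v = w / np.linalg.norm(w)
    c = u @ e_i
    s = np.sqrt(max(0.0, 1.0 - c ** 2))
    P = np.stack([u, v], axis=1)                 # D x 2 basis of the rotation plane
    R_theta = np.array([[c, -s], [s, c]])
    # Identity on the orthogonal complement, rotation by theta in the plane.
    R = np.eye(D) - np.outer(u, u) - np.outer(v, v) + P @ R_theta @ P.T
    return R

rng = np.random.default_rng(3)
dphi = rng.normal(size=5)
R = rotation_aligning_step(dphi)
print(np.round(R @ (dphi / np.linalg.norm(dphi)), 6))   # a standard basis vector

Once the means and variances are imposed in this rotated frame, the perturbation tensor is transformed back with the inverse rotation, exactly as described in the text.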
A generalization to V ∈ C k along the same lines is straightforward, see Sec. 4. Rotating field space to diagonalize the Hessian By relegating perturbations to the tensor of third derivatives, we were free to choose D(D − 1) 2 /6 entries to our liking, since they did not influence the statistical properties of the Hessian. To check whether or not this choice has strong impact on other properties of the resulting potential, we provide in this section another recipe based on rotating the field space such that the Hessian is diagonalized. The required rotation matrix R is a D×D matrix with rows given by the eigenvectors of the Hessian, so thatH = RHR −1 (3.28) is diagonal in the rotated space. The inverse of this matrix R −1 is used to rotate back to the original coordinate system. The rotation to and from the rotated space is again governed by equations (3.17) and (3.18) respectively. Imposing constraints on the mean and variance of δv abc In this section, all the quantities are given in the rotated coordinate system of Sec. 3.4 unless stated otherwise and we omit the underscore notation. Before imposing (3.9) and (3.10) in the rotated coordinate system, we note that the only non-zero components of the Hessian are the diagonal ones, i.e., v aa (a repeated index a is not to be summed over in this section). Thus, equations (3.9) and (3.10) can be re-stated as for the diagonal elements and for the off-diagonal elements (a = b). Again, we have the freedom to choose undetermined components of v abc to our liking, as long as the resulting tensor is completely symmetric under permutations of the indices abc. Our choice is motivated by the following wishes/observations: • We wish to keep the potential as smooth as possible. Further, rotational symmetry is desirable. • We observe that v aaa is present only in the equations of the diagonal elements while other entries (v aab and v abc ) are present in the diagonal as well as off-diagonal elements. Therefore we choose for the means. Since v abi | p n−1 = v abi | p n−2 + δv abi | p n−2 , we have For the variance, we choose (3.39) This choice is straightforward, except for the variance of v aaa , which can be read off once symmetries under rotations is imposed and everything else is fixed. Once the elements of the third order tensor are set in the rotated frame, we rotate them back to the original one by using the inverse rotation matrix R −1 . Generating random potentials V ∈ C k To generate a potential V ∈ C k while maintaining a Hessian in the GOE, we need to continue the Taylor expansion in (3.2) to order k + 1, while adding the perturbation to the k + 1'th derivative tensor. To keep our notation economical, let us introduce the multi-index C j , 2) . . . so that e.g. the third derivative tensor reads v abc = v abC 1 . We kept the indices a, b explicit, since we want to impose conditions (2.10) and (2.11) onto the Hessian v ab . We add perturbations via staying with our convention that δv abC k−1 | p n−1 is added at p n and thus relevant for the potential along Γ from point p n onward. From here on we work in the rotated coordinate system introduced in Sec. 3.3, i.e. all tensors are assumed to be transformed via (3.17) (omitting the underscore), and we assume a constant distance between perturbations so that δφ = e i δs/Λ h = constant. Note that i is not a summation index in this section but designates the direction of δφ in the rotated coordinate system. 
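Returning briefly to the eigenframe prescription of Sec. 3.4 (the same machinery also serves the C^k construction just set up), its only numerical ingredients are an eigendecomposition of the Hessian and the transport of the third-derivative tensor as a rank-3 tensor to and from that frame. A sketch:

import numpy as np

def to_hessian_eigenframe(H, T):
    """Rotate to the frame in which the Hessian is diagonal (Sec. 3.4). Rows of R
    are eigenvectors of H; the third-derivative tensor T is transported as a
    rank-3 tensor. Rotating back uses the inverse rotation R^-1 = R.T."""
    eigvals, eigvecs = np.linalg.eigh(H)          # columns of eigvecs are eigenvectors
    R = eigvecs.T                                  # rows are eigenvectors, so R H R^T is diagonal
    H_rot = R @ H @ R.T
    T_rot = np.einsum('ia,jb,kc,abc->ijk', R, R, R, T)
    return R, H_rot, T_rot

rng = np.random.default_rng(7)
D = 4
H = rng.normal(size=(D, D)); H = (H + H.T) / 2.0
T = rng.normal(size=(D, D, D))
T = sum(T.transpose(p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0
R, H_rot, T_rot = to_hessian_eigenframe(H, T)
print(np.round(H_rot, 6))                          # diagonal up to numerical noise
# After setting the means/variances of the perturbation in this frame, rotate back:
T_back = np.einsum('ia,jb,kc,ijk->abc', R, R, R, T_rot)
print(np.allclose(T_back, T))                      # True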
As a consequence, we may write where the multi index becomes (4.8) Since the perturbation enters only in δv abC k−1 | p n−1 , the remaining deterministic terms can be taken out of the expectation value to yield where we used (4.4). Equating the above with the rhs. of equation (2.10) yields (4.10) Since v abC k−1 is completely symmetric, all means related to < δv abC k−1 |p n−2 > via a permutation of the indices, π(abC k−1 ) ≡ π(abi . . . i), are identical to the above expression. We leave the remaining N (k + 1, D) − N (2, D) elements of v abC k−1 unperturbed, i.e., impose zero mean and variance, as we did for V ∈ C 2 . The above result reduces to (3.22) for k = 2. Similarly, we can derive the variance: since only v abC k−1 contains a perturbation we get Var δv abC k−1 | p n−2 . (4.12) Plugging the above into (2.11) leads to Variances with indices given by a permutation π(abi . . . i) are identical to the above expression, while all remaining variances are set to zero. (4.13) reduces to (3.27) for k = 2. Evidently, it is straightforward to set the mean and variance of δv abC k−1 in the rotated coordinate system such the Hessian is a matrix in the GOE and V ∈ C k . The above values need to be transformed back to our original coordinate system via (3.18), in which the conceptual simplicity is hidden. 5 Examples, comparisons and discussion of V ∈ C i with i = 1, 2, 3 In addition to the known method of generating V ∈ C 1 via Dyson Brownian motion, see Sec. 2.2, we have derived two distinct methods to generate random potentials V ∈ C 2 , one of which we generalized to provide V ∈ C k for arbitrary k ∈ N; thus, we have at our disposal: 4. V ∈ C k , generated via rotating field space to align a basis vector with δφ and delegating perturbation to the k + 1'th derivative tensor, yielding (4.10) and (4.13) for the means and variances. In this section we would like to compare the resulting potentials for k = 1, 2, 3 with each other based on a few selected examples to highlight general features. As explained in Sec. 2.3, we use Λ h = 0.1, Λ v = 1, δs = Λ h /100, σ 2 = 2/D, and chose the direction of steepest descent to provide the path Γ along which the potential is generated. We choose D = 5, so that we are well within the multi-field regime and avoid self intersecting Γ, yet plots, such as the ones depicting eigenvalue-relaxation of the Hessian, remain clear. Further, to ease comparison, we use the same initial configuration in plots (height, slope and eigenvalues of the Hessian); we use the words "iterations" and "steps" interchangeable. We varied initial conditions and the dimensionality of field space to make sure that the plots depicted here are representative. In Fig. 2, we show exemplary plots of the potential along the path Γ: we observe that V ∈ C 2 generated via either method yields potentials comparable to the one originating from Dyson Brownian motion. However, potentials V ∈ C 3 , panel (d), and to a lesser degree V ∈ C 2 in panel (c), are generically more convex than the other ones. While these differences are minor, they may be important if questions pertaining to inflationary cosmology are to be addressed (only flat regions can support inflation); thus, for applications in inflationary cosmology, it is crucial to check the sensitivity of predictions to the method used to generate the potential. We leave such an investigation to future work. The main desired difference between potentials is their differentiability. 
This difference becomes evident if we plot the eigenvalues of the Hessian along the path, as in Sec. 2.3. The corresponding eigenvalues of the potentials in Fig. 2 are shown in Fig. 3. As expected for potentials whose Hessian is a member of the Gaussian orthogonal ensemble for distances above the horizontal correlation length Λ h , we observe eigenvalue relaxation once the path-length exceeds Λ h ; in other words, the initial values of the eigenvalues are usually forgotten after O(Λ h /δs) = O(100) iterations. While such plots look qualitatively the same to the naked eye, a close up, as shown in Fig. 4, reveals the most important quantitative difference: potentials V ∈ C 1 show a jump of the eigenvalues after each step. These discontinuities are intrinsic to Dyson Brownian motion and, as discussed in Sec. 2, can be disastrous if cosmological perturbations generated during inflation are to be computed (artefacts arise, such as ringing patterns in correlation functions). Further, such potentials are not well suited to search for minima, which restricts their usability to model string theoretical landscapes if one's goal is to find suitable vacua for our universe. As we go to V ∈ C 2 , panel (b) and (c) of Fig. 4, the eigenvalues change smoothly, as expected. Since perturbations are delegated to the tensor of third derivatives, we still observe kinks, but these kinks are harmless if one wishes to compute the power-spectrum or search for minima. Nevertheless, they lead to spurious signals in higher order correlation functions, which motivated us to create potentials in even higher differentiability classes. The eigenvalues of a potential V ∈ C 3 are plotted in panel (d) of Fig. 4; while the slope indeed changes continuously, as desired, we observe an artefact of a different kind: the perturbations added to the 4'th derivative tensor add spurious oscillations to the eigenvalues with a wavelength set by the step-length λ ∝ δs . (5.1) The cause of these oscillations is similar to over-fitting data with a polynomial of high degree, and we expect them to be problematic for the computation of the bi-spectrum. Can we eliminate this effect? Since we stitch together different Taylor expansions after each step, and these oscillations trace back to changes in the 4'th derivative tensor, one may hope to reduce the step-length to the point where the contribution of the forth order tensor are well below the ones of the third order tensor when the next patch is reached. For the depicted eigenvalues of the Hessian (V ∈ C 3 ), we stitch together parabola -thus, ideally, the step-length should be taken so small that the presence of a maximum/minimum of the particular Taylor expansion is unlikely to occur within the next step, i.e. < δv abii > δs 2 O(v abi δs, v abii δs 2 ). However, since the means of the perturbations are adjusted according to (4.10), this condition can not be satisfied generically due to the contribution < δv abii > ∼ O(v abi /δs): the left and right hand side of our tentative . Eigenvalue evolution between successive steps for δs = Λ h /100 and different methods of creating random potentials: (a) Dyson Brownian motion causes jumps at each step; (b) and (c): either method of creating V ∈ C 2 leads to a continuous evolution of the Hessian, sufficient for several cosmological applications (hunt for minima, computation of power-spectrum); (d) For k ≥ 3 spurious oscillations arise, that can be problematic for applications. 
While they are intrinsic to the method we used to create such potentials, their amplitude can be made arbitrarily small by reducing δs, see Fig. 5. condition are generically of the same order. Thus, the presence of such oscillations is intrinsic to the method by which we create V ∈ C 3 . To test this explanation, we varied δs from Λ h /10 to Λ h /10 4 in Fig. 5, and indeed, the presence of oscillations is not altered by a reduction of δs, while the wavelength in field space is set by the step-length with a proportionality constant of order one. However, we also observe that the amplitude of oscillations diminishes as the step-length is decreased. To leading order in δs, we can estimate this amplitude via that is A ∝ δs. The observed reduction is slightly weaker, which is understandable since our estimate did not take into consideration the variance of the perturbations in (3.36)-(3.38). However, it is clear how to proceed in applications, such as the computation of the bi-spectrum: to minimize the effect of oscillations, one needs to demand (at least) that A is considerably smaller Figure 5. Eigenvalues for potentials V ∈ C 3 for varying step size δs. Spurious oscillations with a wavelength comparable to δs are visible. While their presence is intrinsic to the method used to create the potential, one can diminish their amplitude to any desired level by reducing δs accordingly. than the eigenvalues under consideration. We normalised the potential such that eigenvalues are of order one, so that we need to demand A 1 which directly translates into a condition for δs. Practically, one may create a few plots of the eigenvalues as in Fig. 5 to decide on an appropriately small δs (in our case, δs Λ h /10 4 appears appropriate). If observables such as the bi-spectrum are to be computed, one should check whether reducing δs further causes leading order changes in results (it should not). A similar line of reasoning needs to be followed for k > 3. Alternatively, one may contemplate altering the method whereby the potential is generated. For example, one may consider applying a running average to the potential as it is being created to eliminate the effect of noisy artifices on scales of order δs. We leave such improvements to future studies. To summarize, while potentials V ∈ C k with k ≥ 3 are not free of problems, they still offer an improvement over potentials in a lower differentiability class. In the end, one may pick the potential with the lowest k that is sufficient for the task at hand in order to retain the computational advantage that locally created random potentials offer over globally created ones. Potentials V ∈ C 2 are sufficient to hunt for minima on landscapes in string theory and enable a computation of the power-spectrum of cosmological perturbations. Further, they are free of the artificial oscil-lations that occur for k ≥ 3. Thus, we plan to use such potentials in forthcoming publications on cosmological applications. Conclusion We derived novel methods to generate random functions in a desired differentiability class along a trajectory by extending the prescription of Dyson Brownian motion (DBM). As in DBM, the Hessian of these functions evaluated at well separated points is a random matrix in the Gaussian orthogonal ensemble (GOE). We were motivated to construct such functions to model complicated potentials on the string theoretical landscape (a field space of high dimensionality) for cosmological applications. 
Particularly potentials V ∈ C 2 are of interest to us, since they enable the search for minima as well as the study of cosmological perturbations and the computation of the power-spectrum (the two-point correlation function). Potentials V ∈ C k with k ≥ 3 are needed to compute higher order correlation functions. The method of constructing such potentials inherits the basic idea from DBM to stitch together local patches wherein the potential is given as a truncated Taylor series. Whenever the next patch is entered, random variables are added to the k + 1'th derivative tensor (k = 1 for DMB, so perturbations are added to the Hessian). For DBM, the statistical properties of these variables are entirely determined by the desired ensemble (the GOE) of the Hessian. However, for k ≥ 2, additional freedom is present. To explore this freedom, we provided two distinct prescriptions for k = 2: the first generates potentials that invoke the least number of random variables and thus provides, in a sense, the smoothest potentials. This prescription is readily extended to arbitrary k ∈ N. The second prescription perturbs primarily in the principal directions of the Hessian. It should be noted that all such potentials are indistinguishable if the statistical properties of the Hessian are used as a discriminator. We followed with a small selection of examples to highlight the properties of potentials with k = 2, 3: for k = 2, we found potentials generated via the two different methods to be qualitatively indistinguishable and free of artefacts; we plan to used both of them in cosmological applications in a future publication. The k = 3 case is somewhat problematic: we observed spurious oscillations of eigenvalues with a wavelength given by the step-length after which perturbations are added. These artefacts are intrinsic to the methods used, but can be made arbitrarily small by reducing the step length. While not optimal from a computational efficiency point of view, such potentials can at least in principle be used to compute higher order correlation functions. While our motivations stem from cosmology, the method to construct such random functions is general and may be of use in other areas of science.
Assessment of Jiangsu Regional Logistics Space Nonequilibrium Situation by Boosting and Bagging Algorithms This study aims to solve the problem of unbalanced regional economic development in Jiangsu and deeply excavates the relevant theories of regional logistics and the unbalanced spatial situation. The classification of regional logistics and spatial disequilibrium and the dimensions involved are studied. Based on the present development status of Jiangsu logistics, the main reasons for the unbalanced situation of the regional logistics space in Jiangsu are identified. The basic principles, application scope, and methods of the Bagging and Boosting algorithms are studied in depth. Then, the Jiangsu logistics space nonequilibrium situation assessment model is established based on the Bagging and Boosting algorithms, and the input data are sorted and summarized. Finally, the results obtained from experiments evaluating the model are analyzed and summarized. The core issues, the factors affecting development, and the uneven development situation have been comprehensively assessed. The conclusion drawn is as follows: the imbalance of logistics productivity across the various regions of Jiangsu will inevitably lead to the imbalance of the regional logistics space in Jiangsu, which affects the coordinated and healthy development of regional logistics in the province. These conclusions shed light on the inherent causes and effects of regional logistics spatial disequilibrium and can promote the coordination, synergy, and common development of the Jiangsu logistics regions. Introduction In recent years, the industrial chains and supply chains of various countries have been restructured at an accelerated pace. The economic structure is changing, and uncertainties and risks are increasing, but the task of economic growth remains arduous. The foundational and supporting roles of logistics in the modern market economy are becoming more and more significant, and logistics has received theoretical support as a factor of economic growth [1]. China's logistics has been affected by the ideology of "emphasizing production and neglecting circulation" and restricted by a divided circulation system, so logistics did not receive due attention and development. It was not until 2001 that modern logistics officially entered China's economic stage. The development of China's logistics has mainly gone through three historical stages: the traditional planned storage and transportation stage (from the founding of the People's Republic of China to the 1980s), the introduction stage of logistics concepts (from the 1980s to the end of the 1990s), and the initial development stage (from the beginning of the 1990s to now). These three stages fully reflect the development process of China's logistics and China's transformation from a weak logistics country to a strong logistics country. From borrowing and learning from excellent experience to promoting China's own logistics development, the level of logistics has kept improving, but there are still many areas for improvement. From a practical point of view, regional logistics has become the key to regional economic growth. The factors of the regional economy have obtained theoretical support and practical application in research. In recent years, problems such as the unbalanced development of China's logistics have become more prominent and have become core factors in the unhealthy development of the logistics industry.
erefore, this also makes the optimal planning of industrial resource distribution and the coordinated development of regional economy cannot be carried out smoothly [2]. Jiangsu is a typical province in China whose geographical location spans south and north. erefore, the unbalanced state of regional logistics development in Jiangsu can represent China's typical unbalanced regional logistics development. Boosting and Bagging algorithms can be used to study it. Boosting is an effective ensemble learning method suitable for supervised classification tasks. Among the more popular Boosting methods at present, there are mainly AdaBoost, Savage Boost, Tangent Boost, and Taylor Boost [3]. e AdaBoost algorithm is discussed. In 1996, the Bagging algorithm was proposed by Breiman et al. [4]. e algorithm has become a well-known representative of ensemble learning methods. Bagging algorithm is based on the bootstrapping sampling method to self-resample from training data [5]. When performing sampling operations, the convenience of the Bagging method is that it does not need to know the true distribution of the samples [6]. In the sampling process, independent repeated sampling with replacement is performed, and the obtained multiple training sample sets are used. At present, the economic development of Jiangsu province is still in a period of important strategic opportunities. e opportunities and challenges faced by the development of its logistics industry have new changes. e southern Jiangsu is mainly dedicated to the dedicated line, and the northern Jiangsu is mainly stowage. e logistics parking lots in Nanjing, Wuxi, Changzhou, and Wuxi in southern Jiangsu are developed, with Wuxi being the most developed, and there is logistics development in southern, central, and northern Jiangsu. e regional and unbalanced characteristics of it are obvious [7]. In recent years, the Jiangsu region has gradually shown problems of unbalanced economic development, resources, development location, logistics layout, development scale, and development level. According to the history and the facts of Jiangsu regional logistics development, Jiangsu territorial space planning in the new century is taken as the foundation. e present situation of the logistics foundation and development conditions of 13 cities in the three major regions are different. e allocation of logistics resource elements is differentiated, focusing on the integrated development trend of the Yangtze River Delta to explore the unbalanced situation and endogenous evolution mechanism of Jiangsu regional logistics space, and based on the empirical results, the system design and countermeasures of unbalanced and coordinated development are put forward. e innovation point lies in the construction of a new framework of integrated analysis from the perspective of the disequilibrium situation of regional logistics space, from the two levels of region and city, and from the time and space dimensions to reveal the disequilibrium nature of regional logistics space more vividly and comprehensively. ese contents supplement the connotation of regional logistics space nonequilibrium, clarify its real status and development context, form a new three-dimensional understanding, and improve the regional logistics space nonequilibrium research system. is research urgently needs to enhance the endogenous power of regional logistics development, promote high-quality development of regional logistics in Jiangsu, and solve the unbalanced and insufficient regional logistics. 
e new requirement is also an important way to improve the pertinence and effectiveness of the coordinated development policy of regional logistics in Jiangsu and to better meet the construction of a modern economic system and the people's growing demand for logistics services. Additionally, these conclusions provide useful experience and reference for narrowing the gap in China's regional logistics development, improving the efficiency of resource allocation, and solving the problem of unbalanced and uncoordinated regional development, and have important practical application value. Current Situation of Unbalanced Situation of Regional Logistics Space in Jiangsu. Regional logistics is a system that can adapt to regional characteristics and functions and meet regional economic development. It is composed of regional logistics networks, information support, and organizational operations. e situation of regional logistics is initially a theoretical assessment for developing countries to achieve economic development goals. is theory has become the theoretical basis that is often cited and referenced in regional development and planning [8]. e development of the social division of labor and regional division of labor is constantly changing the logistics industry structure in various regions. e formation, development, and substitution of the logistics industry make the structure of the logistics industry change all the time to promote the continuous improvement of the industry level. Judging from the current specific situation in China, the logistics industry includes three aspects, and its industry has three properties, as shown in Figure 1. In Figure 1, the logistics industry involves a variety of industries. It has three properties, and its content is different [9]. Its basic industry is composed of different transportation lines, junctions, nodes, and terminals, which provide the basis for the operation of each link of the economic system, as shown in Figure 1(a). e logistics equipment manufacturing industry is an element that provides means in the logistics production labor and can use more advanced technology to provide modern equipment to the traditional manufacturing industry. e logistics information industry is the core in providing software and hardware support for the system, including system management and services, and is a combination of information technology and communication technology [10]. Regional logistics generally refers to regional logistics and local logistics. Its features are shown in Figure 2. In Figure 2, spatial resource differences are the economic basis of regional logistics, including natural and social resources. ere are specific natural and social resources in different regions [11]. Spatial resources in any region cannot be completely balanced, especially in regional logistics. e level of logistics service is in line with the economic development, and the division of its regions is mainly determined according to the degree of economic development. e contents of different regional logistics systems are different, and in essence, it is a structure that is interconnected and restricted to each other. In other words, some regional logistics system integrity may be higher, and some may be lower. However, there is a certain number of logistics systems between regions. 
At present, China's understanding and mastery of the concept of unbalanced development are defined as the unbalanced or unbalanced phenomenon or significant difference in the distribution of resources, accumulation of wealth, economic income, and exercise of rights among different regions, industries, and groups in the development process [12]. e disequilibrium theory has several characteristics, as shown in Figure 3. In Figure 3, the characteristics of the nonequilibrium situation are obvious. Firstly, it has greater realism. e disequilibrium theory breaks through the shackles of general equilibrium theory, faces the disequilibrium phenomenon in real economic life, and draws some theoretical conclusions that are more realistic. It provides a rigorous framework for research resource allocation [13]. Regional unbalanced Mathematical Problems in Engineering development is related to balanced regional development, and it is divided according to the law of unbalanced development of regional economy, not evenly. China's regional economic unbalanced development strategy is gradually maturing, maintaining a moderate economic gap between regions within a certain period and developing in an orderly manner, ultimately realizing the common development of the national economy. e data model of its core development is shown in Figure 4. In Figure 4, the advantage of this model is that it is conducive to highlighting key points, concentrating advantageous resources, and allowing some regions, some industries, and some people to develop. e disadvantage is that this model has led to the widening of the regional, urban, and rural income gap, unbalanced economic development, and imbalanced income structure [14]. e nonbalanced development strategy refers to the strategy of investing limited resources in the regions and industries with higher efficiency first to obtain high-speed growth of the regional economy and promote the development of other regions and industries. Jiangsu province has a huge geographical advantage. It spans from north to south and, together with Shanghai, Zhejiang, and Anhui, constitutes one of the world-class urban agglomerations in the Yangtze River Delta. e development of various transportations in Jiangsu province is relatively high in the country. It has four comprehensive national transportation hubs, enabling the rapid development of regional logistics. At present, the logistics industry has become the key growth point of Jiangsu's economic development. e overall logistics industry is developing rapidly, but the spatial imbalance of regional logistics is still a problem that plagues the coordinated development of regional logistics in Jiangsu [15]. e analysis and measurement of the spatial differences of regional logistics in Jiangsu, the in-depth discussion of the spatial differences in the development of logistics in various regions, the static comparative analysis, and the trend of trend change characteristics are indispensable links to explore the assessment of the nonequilibrium situation of regional logistics in Jiangsu. According to incomplete statistics, there are 49 logistics parks in Jiangsu. Among them, there are ten logistics parks in Wuxi, five logistics parks each in Xuzhou and Changzhou, and four logistics parks in Yancheng, Zhenjiang, Nanjing, and Suzhou. e development of key logistics parks in Jiangsu is generally good. Whether it is infrastructure, operational efficiency, or park functions and service capabilities, they are constantly improving. 
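The dispersion figures quoted above can be obtained from yearly freight series in a straightforward way; the minimal sketch below uses illustrative placeholder numbers rather than the survey data, and reading the reported standard deviations as the spread of the year-by-year gap between two regions is an assumption on our part.

```python
import numpy as np

# Hypothetical yearly road freight volumes (10^4 tons) for the three regions;
# these numbers are illustrative placeholders, not the survey data.
freight = {
    "southern_jiangsu": np.array([52000, 54500, 56100, 58900, 60200]),
    "central_jiangsu":  np.array([21000, 22300, 23800, 24100, 25600]),
    "northern_jiangsu": np.array([60100, 63400, 65800, 68200, 70900]),
}

def pairwise_gap_std(a, b):
    # One plausible reading of the reported figures: the standard deviation of the
    # year-by-year gap between two regions' freight volumes.
    return float(np.std(a - b, ddof=1))

for r1, r2 in [("northern_jiangsu", "southern_jiangsu"),
               ("northern_jiangsu", "central_jiangsu")]:
    gap = freight[r1] - freight[r2]
    print(r1, "vs", r2,
          "| mean gap:", float(np.mean(gap)),
          "| std of gap:", round(pairwise_gap_std(freight[r1], freight[r2]), 2))
```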
With the in-depth implementation of the "Internet +" strategy, the key logistics parks in Jiangsu province have strengthened innovationdriven development and gradually developed into diversified formats such as "Internet + logistics" and "logistics + business", instead of relying solely on traditional modes of transportation, storage, and trade. is model accelerates the transformation and upgrading of the park [16]. In addition, the development of logistics parks in Jiangsu province is facing four major problems and challenges. e specific problems and challenges are shown in Figure 5: In Figure 5, the construction of the Jiangsu logistics area has problems such as inadequate implementation and unreasonable project layout. e phenomenon of duplication of content and construction of logistics parks is common. Some even blindly occupy land out of actual market demand. Problems such as insufficient innovation, single business, and brand influence in the park are obvious, resulting in low overall operating efficiency [17]. At present, less than one-third of all key logistics parks in Jiangsu province has developed diversified businesses, and the proportion of business volume is relatively low. In response to the above problems, the freight situation of the three major regions in Jiangsu is sampled and collected. e standard deviation and mean data of the freight situation are shown in Figure 6. In Figure 6(a), from the perspective of the road freight volume of the three regions, there are significant differences and large fluctuations in the road freight volume of the three regions. Among them, southern Jiangsu and northern Jiangsu have staggered growth, and northern Jiangsu > southern Jiangsu > central Jiangsu, and northern Jiangsu ranks first in the freight volume of the three major regions [18]. e standard deviation of the northern and southern Jiangsu regions reached 6632.22, indicating that the difference in road freight volume between the northern and southern Jiangsu regions was the largest. In Figure 6(b), there are significant differences in road freight turnover between the southern Jiangsu, the central Jiangsu, and the northern Jiangsu. Northern Jiangsu > southern Jiangsu > central Jiangsu. e standard deviation between the northern Jiangsu and the central Jiangsu even reached 459837, indicating that the difference in road freight turnover between the northern Jiangsu and the central Jiangsu is the largest [19]. Bagging and Boosting Algorithms. Bagging and Boosting algorithms combine existing classification or regression algorithms in a certain way to form a more powerful classifier. is is a method of assembling a classification algorithm. is is a method of assembling weak classifiers into strong ones. Bagging is the bagging method [20], and its algorithm flow is shown in Figure 7. In Figure 7, there is indeed no connection between the weak learners of Bagging like Boosting [21], which is characterized by "random sampling." Random sampling is to collect a fixed number of samples from the training set, but after each sample is collected, the samples are put back. Previously collected samples may continue to be collected after being returned. For the traditional Bagging algorithm, the same number of samples as the training set sample m is generally collected randomly [22]. e number of samples obtained in this way is the same as that of the training set, but the content of the samples is different. 
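The bootstrap-and-vote procedure just described can be sketched as follows; scikit-learn decision stumps stand in for the weak learners, which is an illustrative choice rather than the configuration used in this study, and labels are assumed to lie in {-1, +1} as in the notation below.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=100, rng=None):
    """Train n_estimators weak learners, each on a bootstrap sample of size m = len(X)."""
    rng = rng or np.random.default_rng(0)
    m = len(X)
    learners = []
    for _ in range(n_estimators):
        idx = rng.integers(0, m, size=m)            # sampling with replacement
        stump = DecisionTreeClassifier(max_depth=1)  # a simple weak learner (decision stump)
        stump.fit(X[idx], y[idx])
        learners.append(stump)
    return learners

def bagging_predict(learners, X):
    """Majority vote over the individual predictions (labels in {-1, +1})."""
    votes = np.sum([t.predict(X) for t in learners], axis=0)
    return np.where(votes >= 0, 1, -1)
```

Because every bootstrap sample is drawn independently, the individual learners can also be trained in parallel, which is one of the practical attractions of Bagging noted later in the comparison with Boosting.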
If the training set of m samples is randomly sampled N times, the N sample sets are different due to randomness. e basic classifier is usually learned from an existing algorithm, so ensemble learning does not create new algorithms but combines existing algorithms. e integration adopted is shown in the following equation: Among them, M is the number of samples, x and y are the label values, and y belongs to the set [-1,1]. k is usually set to 100, and it is the number of random tree models. P is the integrated value of the entire algorithm. Mathematical Problems in Engineering 5 Error is the assumption error of the weak classifier, x and y are both label values, y belongs to the set [-1, 1], and F(x) is the real label. i is a single sample number belonging to the set [1, N]. C is the correct computation result of the classifier. e Boosting algorithm reduces the weights of the samples that are classified incorrectly by the weak classifier in the previous round by increasing the weights of the samples that are wrongly classified by the weak classifier in the previous round [23], so that the classifier is more accurate to the wrongly classified data. e additive model linearly combines the weak classifiers, and the weight of the classifier with a small error rate is increased by a weighted majority voting method. e weight of the classifier with a large error rate is decreased. Additionally, the residual error is gradually reduced by fitting the residual error, and the models generated at each step are superimposed to obtain the final model. e flow of the Boosting algorithm is shown in Figure 8. In Figure 8, the Boosting algorithm is an algorithm that promotes a weak learner to a strong learner [24]. For a complex task, the sum of multiple judgments is more accurate than any single judgment. e working mechanism of the algorithm is to train a weak learner from the initial training set and then adjust the distribution of training samples according to the performance of the weak learner. e samples that are misclassified by the weak learner before are adjusted later. en, the next weak learner is trained based on the adjusted samples, and so on repeated until the criterion is met. In general, the boosting method builds a model in a stepby-step and iterative manner. e weak learner built in each iteration step is to make up for the deficiencies of the existing model and finally generate a strong learner [25]. Both Bagging and Boosting algorithms combine existing classification or regression algorithms in a certain way to form a more powerful classifier. is is an assembly method of a classification algorithm, that is, a method of assembling weak classifiers into strong classifiers. But the two algorithms have different differences, as shown in Table 1. Establishment Model. In order to be able to determine the pivot and pivot cities in the spatial pattern of regional logistics development, the pivot, that is, the main logistics node city generally chooses the city with a higher "quality" of urban logistics. e development level of urban logistics is higher, the logistics demand is big, the basic logistics infrastructure is sufficient, and the comprehensive logistics ability is strong. e pivot point is a secondary logistics node city. Its logistics development level, facility status, market demand, and functional orientation are weaker than the pivot city, which is a subsidiary of the pivot. 
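For comparison, the Boosting reweighting described earlier in this section can be sketched in an AdaBoost-style form: sample weights grow on misclassified points, and learners with lower weighted error receive larger voting weights. Labels are taken in {-1, +1} as in the notation above; this is an illustration, not the exact implementation used in this study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """AdaBoost-style boosting; y must take values in {-1, +1}."""
    m = len(X)
    w = np.full(m, 1.0 / m)                      # start from uniform sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y])               # weighted error of this weak learner
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # low-error learners get larger voting weight
        w = w * np.exp(-alpha * y * pred)        # up-weight the misclassified samples
        w = w / w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    score = sum(a * t.predict(X) for a, t in zip(alphas, learners))
    return np.sign(score)
```

Unlike Bagging, each round depends on the weights produced by the previous round, so the weak learners must be built sequentially, as the comparison below also notes.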
Under the condition that these theoretical foundations and algorithms are available, there is no unified standard for evaluating the unbalanced situation of regional logistics space. e spatial nonequilibrium situation of regional logistics reflects the comprehensive strength of regional logistics and is a comprehensive index of complexity. e key step is to establish a complete evaluation model to comprehensively evaluate the characteristics and status of regional logistics space nonequilibrium in Jiangsu, which can measure the comprehensive strength of regional logistics development. e indicators to measure the comprehensive strength of regional logistics development are mostly based on the quality of various indicators and the reliability of the data sources. e model selects three evaluation indicators, namely, the level of regional economic development, the scale of regional logistics supply, and the construction of regional logistics infrastructure, as shown in Figure 9. In Figure 9, the scale of regional logistics supply is an important indicator reflecting the ability of regional logistics demand, and it is also a manifestation of the vitality of the regional logistics development. e regional logistics infrastructure construction index is the embodiment of the development status and potential of the regional logistics industry and is an important basic support for the flow of goods in the regional logistics spatial pattern [26]. Regional logistics quality, regional economy, transportation, warehousing, etc. are indicators for evaluating comprehensive strength. e selection of indicators is completed through the gravity model, which is a space interaction model, derived from Newton's law of universal gravitation. In other words, the gravity between two objects is proportional to its mass and inversely proportional to the distance. Later, by establishing a relatively complete and simple economic application to predict the ability of spatial interaction in the e training set is selected with replacement, and each round of training sets selected from the original data set is independent. Sample weight e weight of the sample is continuously adjusted according to the error rate-the greater the error rate, the greater the weight of the sample. Use uniform sampling, with equal weights for each sample. Prediction function Each weak classifier has a corresponding weight, and a classifier with a small classification error will have a larger weight. All predictors have equal weights. Parallel computing Each prediction function can only be generated sequentially because the latter model parameters need to be combined with the model results of the previous round. Individual predictors can be generated in parallel. Evaluation index of unbalanced situation of regional logistics Space in Jiangsu Province Regional economic development level Logistics quality city clustering Construction of regional logistics and transportation facilities Figure 9: Evaluation indicators of the nonequilibrium situation of Jiangsu regional logistics space. field of economics, the theory is further developed and extended. In recent years, many scholars have successfully confirmed empirical research on trade exchanges, population migration, spatial distribution, and other aspects. is article draws on the core idea of the gravity model, constructs the urban gravity model of regional logistics in 13 cities in Jiangsu from two aspects of urban logistics "quality" and inter-city logistics distance, and selects indicators. 
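A minimal sketch of the gravity-model interaction just described is given below; the city "masses" (composite logistics quality scores) and inter-city distances are placeholder values, and the inverse-square distance exponent is the conventional choice for gravity models rather than a value fixed by this study.

```python
# Illustrative logistics gravity model: interaction strength grows with both city
# "masses" (logistics quality) and decays with the square of the distance.
quality = {"Nanjing": 0.82, "Suzhou": 0.91, "Xuzhou": 0.55}       # placeholder scores
distance_km = {("Nanjing", "Suzhou"): 218.0,
               ("Nanjing", "Xuzhou"): 320.0,
               ("Suzhou", "Xuzhou"): 480.0}                       # placeholder distances

def logistics_gravity(m_i, m_j, d_ij, k=1.0):
    return k * m_i * m_j / d_ij ** 2

for (a, b), d in distance_km.items():
    print(f"logistics gravity {a}-{b}: {logistics_gravity(quality[a], quality[b], d):.3e}")
```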
According to the constructed "quality" evaluation index of Jiangsu urban logistics, the weight and information entropy of the three first-level indexes of the entropy weight calculation method are used, and the ranking is carried out. e indicators of the regional economic development level, regional logistics supply scale, and regional logistics infrastructure construction of 13 cities in Jiangsu province are derived using the formula of the seventh step of the entropy weight method. In this experiment, the amount of information on each indicator is used to determine the weight of each indicator [27]. In order to avoid the interference of other factors, the model does not select 13 prefecture-level cities but only three representative regions in Jiangsu and conducts normalized data processing and standardized processing on the indicators one by one [28]. When the entropy weight calculation method is used to normalize and standardize the indicators' data, the calculations are often used, as shown in (5) and (6): where P ij is the probability of the ith city node appearing and e j is the entropy value of the indicator. d j is the information entropy redundancy, w j is the weight value of each indicator, and s i is the composite score for each city. e logistics interaction force between the two cities can be judged through calculation. However, the formation of the regional logistics pattern in Jiangsu is not comprehensive only by determining the pivot and pivot cities. It is also necessary to analyze the influence scope of the logistics between the pivot cities. erefore, combined with the calculation results of the inter-city logistics gravity measurement and cluster analysis, the logistics membership function is used to determine further the radiation range of its axis city logistics activities. Analysis of Experimental Results Convergence effect results of the nonequilibrium situation assessment model: the nonequilibrium evolution process of Jiangsu regional logistics space is analyzed. In the discrete degree of nonsituational development of logistics space in Jiangsu Province and the three major regions, the convergence index represents the standard deviation of the logarithm. t refers to the passage of time. If the convergence index value tends to decrease with the passage of time, it means that there is convergence in the nonequilibrium of regional logistics space in Jiangsu. Otherwise, it diverges. e convergence index and trend of the nonequilibrium situation of the Jiangsu regional logistics space are shown in Figure 10. In Figure 10, the logistics space in the three major regions is in a state of unbalanced development. In the early stage, the unbalanced situation of regional logistics space in Jiangsu changed from divergence to convergence, but the fluctuation range is obvious and some fluctuations were very large. Both the southern Jiangsu region and the northern Jiangsu region showed a trend of rapid rise and then declined, and the central Jiangsu region is more moderate. In the midterm, the nonequilibrium situation of regional logistics space in Jiangsu has changed from divergence to convergence, and the southern and central Jiangsu regions have shown a trend of rapid rise and then decline. e fluctuation range of the northern Jiangsu is relatively flat, showing a trend of convergence. In the later period, the convergence index of the central Jiangsu region showed a divergent trend from weak fluctuations and then began to converge, and the overall performance is stable. 
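The convergence index described above can be sketched as follows, assuming it is the cross-regional standard deviation of the logarithm computed period by period; the series below are placeholders rather than the study's data, and a downward trend of the index indicates convergence while an upward trend indicates divergence.

```python
import numpy as np

# Hypothetical yearly logistics-output series for the three regions (placeholders).
years = np.arange(2005, 2020)
series = np.array([
    np.linspace(5.0, 9.5, years.size),   # southern Jiangsu
    np.linspace(2.0, 4.0, years.size),   # central Jiangsu
    np.linspace(3.0, 8.0, years.size),   # northern Jiangsu
])

# Convergence index: standard deviation of the logarithm across regions, per year.
sigma_t = np.std(np.log(series), axis=0, ddof=1)
trend = "convergence" if sigma_t[-1] < sigma_t[0] else "divergence"
for y, s in zip(years, sigma_t):
    print(y, round(float(s), 4))
print("overall tendency:", trend)
```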
To sum up, there is no convergence effect due to the nonequilibrium of regional logistics space in Jiangsu. Northern Jiangsu has the Mathematical Problems in Engineering fastest divergence speed, followed by the central Jiangsu. e unbalanced situation of the overall regional logistics space is obvious in Jiangsu. Analysis of the evaluation experiments of various indicators of the nonequilibrium situation in logistics space: through the experiment of the nonequilibrium situation convergence index, various indicators of Jiangsu regional logistics space nonequilibrium situation are further evaluated and tested. at is, each index belonging to different regions is subjected to cluster analysis. Cluster analysis is an important display form of regional disequilibrium in logistics space. e experiment passed 12 cities in the three regions, numbered 1-12, corresponding Suzhou, Nanjing, Yangzhou, Yancheng, Suqian, Taizhou, Zhenjiang, Huaian, Wuxi, Lianyungang, Xuzhou, and Nantong. ese regions are used as independent variables for each region to be clustered. e dependent variable is the number of clusters. e experimental results of the cluster analysis of this model are shown in Figure 11. In Figures 11(a) and 11(b), the experimental results of each index show the characteristics of nonequilibrium. In Figure 11(a), the uneven data on the level of economic development and logistics quality indicate that the vitality of logistics development in each region is uneven, the flow of logistics items in the region is uneven, and the development status and potential of the regional logistics industry are uneven. In Figure 11(b), the unbalanced number of clusters in the supply scale and the construction of transportation facilities indicates that the logistics in each region is not balanced with the development of the economy, and the trend of modernization and diversification of logistics activities is uneven. To sum up, the spatial nonequilibrium and steady state of logistics development in the three major regions of Jiangsu province are different, but the nonequilibrium trend is basically the same. is will inevitably become the core restricting factor for the coordinated development of Jiangsu's overall logistics. e core element of regional logistics development is the central nervous system of the entire regional logistics operation and is an important force to promote the growth of regional logistics. e imbalance of logistics productivity in various regions of Jiangsu will inevitably lead to the unbalanced situation of regional logistics space in Jiangsu, which affects the process of coordinated and healthy development of regional logistics in Jiangsu. e spatial disequilibrium of regional logistics is the result of the joint action of various objective factors, which have certain historical rationality and an immutable objective side. Regional logistics development should not only respect the objective fact of unbalanced development but also combine with the development law of regional logistics to build an innovative operation mechanism that can promote the unbalanced and coordinated development of regional logistics space. 
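The kind of cluster analysis described above can be sketched as follows, grouping the twelve cities by their indicator values; k-means is an illustrative choice of clustering method, since the study does not specify one here, and the indicator values are placeholders rather than the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (economic level, logistics supply scale, infrastructure) indicators
# for the 12 cities used in the clustering experiment; placeholder values only.
cities = ["Suzhou", "Nanjing", "Yangzhou", "Yancheng", "Suqian", "Taizhou",
          "Zhenjiang", "Huaian", "Wuxi", "Lianyungang", "Xuzhou", "Nantong"]
X = np.array([[0.95, 0.90, 0.92], [0.90, 0.88, 0.94], [0.55, 0.50, 0.52],
              [0.40, 0.45, 0.38], [0.30, 0.28, 0.25], [0.52, 0.48, 0.50],
              [0.58, 0.55, 0.60], [0.35, 0.33, 0.30], [0.92, 0.85, 0.88],
              [0.38, 0.36, 0.40], [0.60, 0.62, 0.58], [0.65, 0.60, 0.63]])

# Three clusters, loosely corresponding to southern/central/northern development levels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for city, label in zip(cities, kmeans.labels_):
    print(f"{city}: cluster {label}")
```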
With the coordinated development of Jiangsu's regional economy, the top priority of Jiangsu's regional logistics development at this stage is to speed up the rationalization of the relationship between the government and the logistics market, optimize the rational distribution and layout of regional logistics productivity, and promote the quality change and efficiency of regional logistics development. Changes and power changes activate logistics productivity and better integrate into the integrated development of the Yangtze River Delta region. Additionally, it is a long-term and arduous task to explore and promote the unbalanced and coordinated development of regional logistics in Jiangsu. It is also a very challenging yet innovative practice. Conclusion At present, the economic development of Jiangsu province is still in a period of important strategic opportunities, while the opportunities and challenges faced by the development of the logistics industry in Jiangsu have new changes. ere are still some problems in the development scale and development level of the logistics industry in Jiangsu due to the imbalance of regional economic development, resource status, development location, and logistics distribution. Dedicated lines dominate the southern Jiangsu, the northern Jiangsu is dominated by stowage, and logistics parking lots are developed in the southern Jiangsu. e regional and unbalanced characteristics of logistics development in the three major regions are obvious. Firstly, the related theories of regional logistics and spatial nonequilibrium situation are deeply excavated to evaluate the spatial nonequilibrium situation of regional logistics in Jiangsu. e classification of regional logistics and spatial nonequilibrium and the levels involved are further based on the current development status of Jiangsu logistics. e main reasons for the spatial nonequilibrium situation of Jiangsu regional logistics are locked. Secondly, the basic principles, application scope, and application methods of Bagging and Boosting algorithms are, respectively, studied. en, the Jiangsu logistics space nonequilibrium situation evaluation model is established based on the Bagging and Boosting algorithms. e input data is sorted and summarized. Finally, by analyzing and summarizing the results obtained from the experiments of the evaluation model, the core issues of the unbalanced situation of the regional logistics in Jiangsu and the factors that affect the coordinated and healthy development of regional logistics are obtained. e uneven development trend of regional logistics space in Jiangsu has been comprehensively evaluated. e inherent root causes and effects of regional logistics space disequilibrium are of great significance, which can promote the coordination, coordination, and common development of the Jiangsu logistics regions. e research indicators selected in this paper may need to be further supplemented, and the selection method of indicators needs to be further improved. e basis for the selection of indicators needs to be further clarified to have a deeper understanding and better grasp of the complexity and laws of the unbalanced situation of regional logistics space. It needs to be expanded and deepened in future research. e problem of unbalanced development of regional logistics is analyzed and recognized from multiple perspectives. e nonequilibrium concept system of regional logistics development is further constructed. 
is system can enrich and develop the theoretical framework of non-equilibrium research. Additionally, new spatial analysis technologies and the integration of more spatial data processing methods have been introduced and have made breakthroughs. is is more inclined to applied research, enhances the explanatory power of the uneven development of regional logistics space, and supports promoting the healthy, coordinated, and sustainable development of the entire regional logistics industry. Data Availability e data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest e authors declare no conflicts of interest.
Empirically characteristic analysis of chaotic PID controlling particle swarm optimization Since chaos systems generally have the intrinsic properties of sensitivity to initial conditions, topological mixing and density of periodic orbits, they may tactfully use the chaotic ergodic orbits to achieve the global optimum or their better approximation to given cost functions with high probability. During the past decade, they have increasingly received much attention from academic community and industry society throughout the world. To improve the performance of particle swarm optimization (PSO), we herein propose a chaotic proportional integral derivative (PID) controlling PSO algorithm by the hybridization of chaotic logistic dynamics and hierarchical inertia weight. The hierarchical inertia weight coefficients are determined in accordance with the present fitness values of the local best positions so as to adaptively expand the particles’ search space. Moreover, the chaotic logistic map is not only used in the substitution of the two random parameters affecting the convergence behavior, but also used in the chaotic local search for the global best position so as to easily avoid the particles’ premature behaviors via the whole search space. Thereafter, the convergent analysis of chaotic PID controlling PSO is under deep investigation. Empirical simulation results demonstrate that compared with other several chaotic PSO algorithms like chaotic PSO with the logistic map, chaotic PSO with the tent map and chaotic catfish PSO with the logistic map, chaotic PID controlling PSO exhibits much better search efficiency and quality when solving the optimization problems. Additionally, the parameter estimation of a nonlinear dynamic system also further clarifies its superiority to chaotic catfish PSO, genetic algorithm (GA) and PSO. Introduction The emergence of chaotic systems was initially described by Lorenz [1] and by Hénon [2]. The two famous chaotic attractors bearing their names are the cornerstone of chaos theory in modern literatures. Chaos can be described as a deterministic behaviorial characteristic of bounded nonlinear systems. Chaotic systems generally exhibit the following properties: sensitive to a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 initial conditions, topologically mixing, and dense in periodic orbits. Although they usually appear to be stochastic, they are conditionally deterministic and periodically ergodic through the whole search space. These distinct merits have caused great concerns from many scientific disciplines including geology, mathematics, microbiology, biology, computer science, economics, engineering, finance, algorithmic trading, meteorology, philosophy, physics, politics, population dynamics, psychology, and robotics. Up to now, chaos theory has become a very active area of research and its applicability is also vastly broadened. Scholars and practitioners all over the world make full use of it to investigate the control, synchronization, prediction and optimization problems of nonlinear dynamic systems by following chaotic ergodic orbits. As is known, finding out optimal solutions is a hard and significant task in a good many nonlinear dynamic systems. Optimization problem solving is chiefly concerned about the quantitative and qualitative study of optima to pursue and the methods of finding out them. 
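The defining properties just listed can be made concrete with the logistic map at μ = 4, the same map exploited later in this paper. The minimal sketch below starts two orbits a distance 10⁻⁶ apart and shows them separating to order one within a few dozen iterations, while each orbit ranges ergodically over (0, 1).

```python
import numpy as np

def logistic_orbit(x0, mu=4.0, n=60):
    """Iterate x_{t+1} = mu * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(n):
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)          # a minute difference in the initial value
gap = np.abs(a - b)
# Sensitivity to initial conditions: the gap grows from 1e-6 to order one within a
# few dozen iterations, while each orbit wanders densely over (0, 1).
print("gap after 10, 30, 60 steps:", gap[10], gap[30], gap[60])
```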
The emergent optimization techniques are usually divided into three distinct classes: natural phenomena, physical phenomena and mathematical computational phenomena. They often tend to exploit evolutionary heuristics to solve the solutions. In addition, being deterministic and ergodic, chaos is combined with evolutionary heuristics and acts as a prominent role in solving optimization problems. There exist two chaotic ways to be applied to optimization areas [3][4][5]. The first way is to introduce chaos into a unified ensemble like neural network. The harmonic combination of neurons and non-equilibrium dynamics with diverse concomitant attractors can completely use chaotic ergodic orbits to pursue the global optimum. The another way is to closely combine evolutionary variables with chaotic attractors and edges. Their generic philosophy is as follows: mapping the relevant variables or ensemble in the problems from the chaotic space to the search space, and then utilizing chaotic ergodic orbits to search the optima instead of using random orbits. Meanwhile, in order to obtain the objective, sensitivity to initial conditions has to be taken into consideration seriously. More inspiringly, great progresses pertaining to chaotic optimization heuristics have been made in the past decade [6][7][8][9][10][11][12][13][14]. Simultaneously, some recent remarkable work on the study of PSO is worth noting [15][16][17][18]. Liu et al. proposed a hybrid particle swarm optimization algorithm by incorporating logistic chaos and adaptive inertia weight factor into PSO, which reasonably combines the populationbased evolutionary PSO search ability with chaotic search behavior [6]. In [7], Cai et al. presented a chaotic PSO method based on the tent equation to solve economic dispatch problems with generator constraints. Compared with the traditional PSO method, the chaotic PSO method has good convergence property accompanied by the lower generation costs and can result in great economic effect. Hong elucidated the feasibility of applying a chaotic PSO algorithm to choose the suitable parameter combination for a support vector regression model. The optimized model provides the theoretical exploration of the electric load forecasting support system [8]. In [9], Wang and Liu proposed a logistic chaotic PSO approach to generate the optimal or near-optimal assembly sequences of products. The proposed method is validated with an illustrative example and the results are compared with those obtained using the traditional PSO algorithm under the same assembly process constraints. Chuang et al. presented accelerated chaotic PSO with an acceleration strategy and used it to search through arbitrary data sets for appropriate cluster centers. Results of the robust performance from accelerated chaotic PSO indicate that this method an ideal alternative for solving data clustering problem [10]. In [11], Chuang et al. proposed chaotic catfish PSO. Statistical analysis of the experimental results indicate that the performance of chaotic catfish PSO is better than the performance of PSO, chaotic PSO, catfish PSO. In [12], Wang et al. developed an approach for grey forecasting model, which is particularly suitable for small sample forecasting, based on chaotic PSO and optimal input subset. The numerical simulation result of financial revenue demonstrates that developed algorithm provides very remarkable results compared to traditional grey forecasting model for small dataset forecasting. More recently, Gandomi et al. 
introduced chaotic accelerated PSO and applied it to solving three engineering problems. The results show that chaotic accelerated PSO with an appropriate chaotic map can clearly outperform standard accelerated PSO, with very good performance in comparison with other algorithms and in application to a complex problem [13]. In [14], Xu et al. presented a novel robust hybrid PSO based on piecewise linear chaotic map and sequential quadratic programming. This novel algorithm makes the best of ergodicity of the piecewise linear chaotic map to help PSO with the global search while employing the sequential quadratic programming to accelerate the local search. Qin et al. presented an improved PSO algorithm with an interswarm interactive learning strategy by overcoming the drawbacks of the canonical PSO algorithm's learning strategy. The algorithm is inspired by the phenomenon in human society that the interactive learning behavior takes place among different groups [15]. Zhang et al. proposed a novel vector coevolving particle swarm optimization algorithm [16]. Du et al. presented a heterogeneous strategy PSO, in which a proportion of particles adopts a fully informed strategy to enhance the converging speed while the rest is singly informed to maintain the diversity [17]. Niu et al. proposed a new variant of PSO, named symbiosis-based alternative learning multiswarm particle swarm optimization [18]. Since PID controllers can successfully adopt a weighted PID term sum to determine a new control variable and further minimize the error over time between a desired setpoint variable and a measured process variable, PID controlling law has been widely used in various industry control systems. Since Lu et al. proposed a PID controlling PSO algorithm and successfully applied it to estimating the parameters of vertical takeoff and landing aircrafts [19,20]. Therefore, in this paper, in order to improve the performance of the algorithm and broaden its more applications, we propose a novel hybrid PSO algorithm which we call chaotic PID controlling PSO. The hierarchical inertia weight coefficients, PID controller, and chaotic logistic map are simultaneously incorporated into PSO to improve the PSO nonlinear dynamics. The hierarchical inertia weight coefficients are determined in accordance with the present fitness values of the local best positions. The chaotic logistic map is used in both the substitution of the two random parameters and the chaotic local search of for the global best position. Successively, the convergent analysis of chaotic PID controlling PSO is deeply investigated. For the purpose of performance evaluation of chaotic PID controlling PSO, empirical experiments are conducted on some complex multimodal functions. Then it is further used in estimating the parameters of a nonlinear dynamic system in engineering. These simulation results prove the better effectiveness and efficiency of chaotic PID controlling PSO when solving the optimization problems, compared with other chaotic PSO algorithms and meta-heuristics such as chaotic PSO with the logistic map [6], chaotic PSO with the tent map [7], chaotic PSO [10], chaotic catfish PSO [11], pure random search (PRS) [21], GA [22], multistart (MS) [23], simulated annealing (SA) [23], taboo search (TS) [24], standard PSO (SPSO) [25,26], chaotic simulated annealing (CSA) [27] and center PSO (CenterPSO) [28]. The remainder of the paper is organized as follows. 
Section 2 depicts the dynamical model, hierarchical inertia weight, chaotic local search for the global best position, whole procedure and convergent analysis of chaotic PID controlling PSO. Section 3 presents the experimental study of conducting chaotic PID controlling PSO on some complex multimodal functions together with other chaotic PSO in [6,7,11]. Section 4 depicts the application of parameter estimation of a nonlinear dynamic system using chaotic PID controlling PSO. Section 5 gives the conclusions and future work. Representation of chaotic PID controlling PSO SPSO is a stochastic population-based algorithm which is modeled on the behaviors of insects swarming, animals herding, birds flocking, and fish schooling where these swarms search for food in a collaborative manner, and it was originally introduced by Kennedy and Eberhart in 1995 [25,26]. It is usually used for the optimization of continuous nonlinear systems. Since SPSO uses a simple swarm emulating mechanism to guide the particles to search for globally optimal solutions and implements easily, it has succeed in solving many real-world optimization problems. In order to improve the performance of the SPSO algorithm and achieve the specific goals of accelerating convergence speed and avoiding local optima, we herein bring forward a novel PSO approach called CPIDSO. In this part, we discuss the dynamical model, hierarchical inertia weight, chaotic local search for the global best position, and give a full description of the procedure of chaotic PID controlling PSO in turn. Dynamical model of chaotic PID controlling PSO. SPSO is a kind of typically stochastic standard algorithm to search for the best solution by simulating the movement of the flocking of birds or fish. It works by initializing a flock of birds or fish randomly over the searching space, where each bird or fish is called a particle. These particles fly with certain velocities and find the global best position after some generations. At each generation, they are dependent on their own momentum and the influence of their own local and global best positions x lbest and x gbest to adjust their own next velocity v and position x to move in turn. SPSO is clearly depicted as follows , where ω pso , c 1 and c 2 denote the inertia weight coefficient, cognitive coefficient and social coefficient, respectively, and rand 1 , rand 2 are both random values between 0 and 1. Besides, v is clamped to a given range [-v max , + v max ]. Supposing ϕ 1 = c 1 Á rand 1 , ϕ 2 = c 2 Á rand 2 , ϕ = c 1 Á rand 1 +c 2 Á rand 2 and y ¼ 0 2 0 , after introducing a proper PID controller into Eqs (1) and (2) [19,20], we may obtain the following Eq (3). Please note that t in the Eq (3) denotes the present iterative generation and are not the absolute time metric. Actually, the PSO system is a continuous system. Therefore, we have used the PID controlling model in the context. MaxT is the maximum generation. If the random parameters rand 1 and rand 2 in Eq (1) of SPSO are chaotic, they can ensure the optimal ergodicity throughout the search space. Furthermore, there are no fixed points, periodic orbits, or quasi-periodic orbits in the behaviors of the chaotic systems. 
Therefore, they are necessarily substituted by the two sequences Cr (t) and (1 − Cr (t) ) generated via the following logistic map Eq (4) Cr ðtþ1Þ ¼ m Á Cr ðtÞ Á ð1 À Cr ðtÞ Þ; i ¼ 0; 1; 2; Á Á Á ; n: ð4Þ , where Cr (0) is generated randomly for each independent run, but it is not equal to {0, 0.25, 0.5, 0.75, 1}, and μ is equal to 4. Obviously, Cr (t) is distributed in the interval (0, 1.0). So the driving parameter μ of the logistic map controls the behavior of Cr t as the iteration number t goes to infinity. So ϕ and θ are changed into the following Eqs (5) and (6). Concerning the inertia weight coefficient, we adopt the following hierarchical Eq (7) [6] w pso ¼ , where w Pso max and w Pso min represent the maximum and minimum of w pso , f is the current objective value of the particle, f avg and f min are the average and minimum objective values of all particles, respectively. In addition, the cognitive coefficient is supposed to decrease linearly from 2 to 0 while the social coefficient is supposed to increase linearly from 0 to 2. Consequently, our proposed chaotic PID controlling PSO is comprised of Eqs (2) and (3). Chaotic local search of chaotic PID controlling PSO. In chaotic PID controlling PSO, we introduce the following logistic Eq (8) in the process of the chaotic local search for the global best position x gbest to improve the mutation mechanism , where Cx gbest,i (t) denotes the ith chaotic variable, and μ is equal to 4. Obviously, Cx gbest,i (t) is distributed in the interval (0, 1.0) under the conditions that the initial Cx gbest,i (0) 2 (0, 1) and that Cx gbest,i (0) = 2 {0.25, 0.5, 0.75}. In general, the chaotic variable has special properties of ergodicity, pseudo-randomness and irregularity. Since a minute difference in the initial value of the chaotic variable would result in a considerable difference in its long time behavior, the chaotic variable can travel ergodically over the whole search space. Therefore, these merits of the chaotic variable can help the global optimum keep away from the local optima. The procedure of the chaotic local search for the global best position based on the abovementioned logistic Eq (8) can be illustrated as follows: Step 1: Set t = 0 and map the decision variables x gbest,i (t) i = 1, 2, . . ., n among the intervals (x min, i , x max, i ) to the chaotic variables Cx gbest,i (t) located in the intervals (0, 1) using the following Eq (9). Step 2: Determine the chaotic variables Cx gbest,i (t + 1) for the next iteration using the logistic . Step 4: Evaluate the new solution with the decision variables x gbest,i (t + 1) . Step 5: If the new solution is better than the predefined criterion or the predefined maximum iteration reaches output the new solution as the result of the chaotic local search for the global position; otherwise, let t = t + 1 and go back to Step 2. Procedure of chaotic PID controlling PSO. Consequently, based on the aforementioned contexts, our proposed chaotic PID controlling PSO can be depicted below in detail. Step 1: Initialize parameters including the number PN of particles, dimensional size D of each particle, maximum generation number MaxT, initial chaotic logistic values Cr (0) and Cx (0) , initial chaotic tent value Cx1 (0) , initial position x and velocity v of each particle, inertia weight coefficient w pso , and cognitive coefficient c 1 , social coefficient c 2 . Calculate the initial fitness of each particle, and set the initial local best position x lbest and global best position x gbest . 
Step 2: Calculate the three parameters k_p, k_i and k_d of the PID controller. Then, in terms of Eqs (2) and (3), calculate the next velocity v(t) and position x(t) of each particle. Next, calculate the fitness of each particle, and set the local best position x_lbest and the global best position x_gbest. Step 3: If the fitness of the global best position remains the same for seven consecutive generations, implement the chaotic local search for the global best position, and update the global best position using the result of Eq (10). Step 4: Check whether the global best fitness(x_gbest) meets the given stopping threshold, or whether the maximum generation number MaxT is reached. If not, go back to Step 2. Step 5: Otherwise, terminate the operation. Finally, output the global best position x_gbest, its corresponding global best fitness, and the convergent generation number. The pseudo-code for chaotic PID controlling PSO is presented below in Algorithm 1. Convergent analysis of chaotic PID controlling PSO In this part, the convergence of chaotic PID controlling PSO is analytically studied. Theorem 1. In chaotic PID controlling PSO, whose recurrence equations are Eqs (2) and (3), Eq (11) is obtained. This recurrence equation is approximately a constant-coefficient nonhomogeneous linear one, and the secular equation of the corresponding homogeneous recurrence equation, together with its latent roots, follows. According to the relations of the recurrence Eq (11) and its special solution, the special solution can be solved; according to the relations of the recurrence Eq (11), its general solution, special solution, and latent roots, the general solution of the recurrence Eq (11) can be obtained. Therefore, Theorem 1 holds. Experimental study In this part, we conduct a detailed experimental study to evaluate the performance of chaotic PID controlling PSO. The experiments include the description of the experimental setup, the convergence, robustness and computational cost of chaotic PID controlling PSO, as well as an experimental discussion. Selected chaotic PSO algorithms and parameter setting. In order to illustrate, compare and analyze the effectiveness and performance of chaotic PID controlling PSO, we select four state-of-the-art chaotic PSO variants, including the proposed chaotic PID controlling PSO, to conduct the experiments on ten analytic test problems with 5, 15 and 100 dimensions. These chaotic PSO variants are listed below and the settings of their important parameters are summarized in Table 1. • Chaotic PSO with the logistic map (CPSO-1) [6]; • Chaotic PSO with the tent map (CPSO-2) [7]; • Chaotic catfish PSO with the logistic map (CPSO-3) [11]; • Chaotic PID controlling PSO (CPIDSO). For CPIDSO, w_pso is decided by Eq (7) with w_pso^min = 0.4 and w_pso^max = 0.9 (see Table 1 for the full settings). Benchmark functions. Ten representative benchmark functions are used to test the selected chaotic PSO algorithms [29]. They are shown below in Table 2. Since chaos attempts to help evolutionary algorithms avoid getting stuck in local optima, these benchmark functions are mainly chosen to be multimodal problems. It is apparent that most of these test functions are hybrid composites of typical multimodal functions like the Ackley, Rosenbrock, Griewank, Rastrigin, Schwefel and Weierstrass functions, so that their properties become more complicated and much closer to real-world environments.
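A minimal sketch of the chaotic local search invoked in Step 3 (and detailed in Steps 1-5 of the previous subsection) is given below. It assumes that the mapping of Eq (9) and its inverse are the usual linear scalings between the search bounds and (0, 1); the acceptance test and iteration cap are illustrative choices rather than the paper's exact criterion.

```python
import numpy as np

def chaotic_local_search(x_gbest, bounds, fitness, max_iter=20, mu=4.0):
    """Chaotic local search around the global best position (sketch of Eqs (8)-(10)).

    bounds: list of (x_min_i, x_max_i) pairs; fitness: callable to be minimized.
    """
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    cx = (x_gbest - lo) / (hi - lo)             # Step 1: map to chaotic variables in (0, 1)
    cx = np.clip(cx, 1e-6, 1.0 - 1e-6)          # avoid the fixed points of the logistic map
    best_x, best_f = np.array(x_gbest, dtype=float), fitness(x_gbest)
    for _ in range(max_iter):
        cx = mu * cx * (1.0 - cx)               # Step 2: logistic map, Eq (8)
        cand = lo + cx * (hi - lo)              # Step 3: map back to decision variables
        f = fitness(cand)                       # Step 4: evaluate the new solution
        if f < best_f:                          # Step 5: keep the improvement (illustrative test)
            best_x, best_f = cand.copy(), f
    return best_x, best_f
```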
Such hybrid composites are thus beneficial for a reasonable verification of the performance of chaotic PID controlling PSO. Three-dimensional maps of the two-dimensional test functions f_3, f_5 and f_6 in Table 2 are shown in Fig 1. Convergence of chaotic PID controlling PSO In order to validate the convergent performance of chaotic PID controlling PSO, it is run together with the other three chaotic PSO algorithms on the benchmark test functions in Table 2 (which include, among others, the Shifted Rotated Weierstrass Function, Schwefel's Problem 2.13 and Griewank's Function). Table 3 presents the global minimum means and variances of the 20 runs of the four chaotic PSO algorithms on the ten test functions of Table 2 with dimension 5. Table 4 presents the corresponding results with dimension 15, and Table 5 with dimension 100. The best results among the four chaotic PSO algorithms are shown in bold in Tables 3-5. Fig 2 presents the convergence characteristics, in terms of the best fitness value of the median run of the diverse chaotic PSO algorithms, for each test function with dimension 5; Fig 3 presents the corresponding characteristics for dimension 15, and Fig 4 for dimension 100. The results of the proposed chaotic PID controlling PSO are depicted by bold solid lines in Figs 2, 3 and 4. Note that the function fitness here is defined as the absolute value of the difference between the given global minimum in Table 2 and the computed global minimum, and that the Y axes in Figs 2, 3 and 4 are on a logarithmic scale. From the results in Table 3, we clearly observe that for the multimodal problems in Table 2, chaotic PID controlling PSO achieves the best results on most of the test functions f_1−f_4 and f_6−f_7, while it does not exhibit the best performance on the test function f_5. In addition, chaotic PID controlling PSO performs better than CPSO-1 and CPSO-2 on the function f_5, but CPSO-3 achieves the best result on that function. It is worth noting that, compared with CPSO-1 and CPSO-2, CPSO-3 yields comparatively better results on the test functions f_1, f_6, f_8, f_9 and f_10. Furthermore, CPSO-1 performs better than CPSO-2 and CPSO-3 on the test functions f_3 and f_7, whilst CPSO-2 does better than CPSO-1 and CPSO-3 on the test functions f_2 and f_4. Comparing the results in Table 3 with the graphs in Fig 2, we find that CPSO-1 and CPSO-2 perform rather poorly on the test functions f_6, f_9 and f_10, and CPSO-3 does worst on the test functions f_2, f_3, f_4 and f_7. These results demonstrate the better effectiveness and efficiency of chaotic PID controlling PSO in solving most multimodal problems.
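As a minimal illustration of this fitness definition, the sketch below computes the absolute error to a known optimum, using the plain Rastrigin function as a stand-in, since the shifted and rotated composites of Table 2 are not reproduced here.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function: a typical multimodal benchmark with global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def fitness_error(computed_minimum, given_global_minimum):
    """'Function fitness' as plotted in Figs 2-4: |given global minimum - computed minimum|."""
    return abs(given_global_minimum - computed_minimum)

# Example: error of a candidate solution close to, but not exactly at, the optimum.
best_found = rastrigin(np.full(5, 0.01))
print(fitness_error(best_found, 0.0))
```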
When the dimensional size increases from 5 to 15, experiments similar to those conducted on the 5-D problems are repeated on the 15-D problems, and the results and graphs are presented in Table 4 and Fig 3. From the results in Table 4 and the graphs in Fig 3, we observe that, although the results of the diverse chaotic PSO algorithms for the 15-D problems in Table 2 are not as good as those for the 5-D problems, the algorithms show many similarities with the 5-D case. Chaotic PID controlling PSO still exhibits the best results on most of the test functions f_1−f_4 and f_6−f_7 except f_5, while CPSO-3 achieves the best result on the test function f_5. Besides, CPSO-1 and CPSO-2 perform worse on the test functions f_1, f_6, f_9 and f_10, and CPSO-3 still achieves the worst results on the test functions f_3 and f_7. However, despite these results, CPSO-2 and CPSO-3 become more robust on the complex problems in Table 2 as the dimensional size increases from 5 to 15. From the graphs in Fig 3, it is obvious that chaotic PID controlling PSO shows much better results than the other CPSO algorithms on most complex multimodal problems, since the time-varying PID controller, chaotic random parameters and chaotic local search for the global best position effectively improve the evolutionary dynamics of the particles at the same time. From the graphs in Fig 4 and the results in Table 5, it can be observed that when the high-dimensional 100-D problems are solved, the diverse chaotic PSO algorithms degenerate sharply. Although chaotic PID controlling PSO still achieves the best results on most of the test functions, its search capability obviously gets weaker than before, so that it suffers from local optima and the degeneracy problem, especially on the test function f_6. There are several important causes that merit attention. Besides the usual expansion of the search space, the lack of more effective social learning, hierarchical inertia weight and chaotic local search strategies is a non-negligible one. Such causes directly result in the deterioration of swarm diversity. Table 6 presents the fixed accuracy level of the selected analytic test functions in Table 2 for performance testing of the diverse chaotic PSO algorithms. A successful run denotes a run during which the algorithm achieves the fixed accuracy level within the maximum FEs for a particular dimension. Based on successful runs, the success rate (Suc. Rate) and success performance (Suc. Perf.) are defined below [30]:

Suc. Rate = (number of successful runs) / (total runs),  (17)
Suc. Perf. = mean(FEs for successful runs) · (total runs) / (number of successful runs).  (18)

Table 7 presents the success rates and success performances of the diverse chaotic PSO algorithms for the 5-D test functions in Table 2, with the best results among the chaotic PSO algorithms shown in bold. From the results in Table 7, it can be seen that chaotic PID controlling PSO achieves the best success rates and success performances when solving the test functions f_1−f_4 and f_6, while CPSO-3 does best on the test function f_5 and CPSO-1 does best on the test functions f_7 and f_8. None of the chaotic PSO algorithms shows good results on the test functions f_9 and f_10.
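The two measures of Eqs (17) and (18) can be computed as in the following sketch; the run data shown are illustrative, not the values reported in Table 7.

```python
def success_metrics(fes_per_run, max_fes, reached_accuracy):
    """Success rate and success performance as in Eqs (17)-(18).

    fes_per_run: function-evaluation counts, one per run.
    reached_accuracy: booleans, True if the run hit the fixed accuracy level.
    """
    total_runs = len(fes_per_run)
    successful = [fes for fes, ok in zip(fes_per_run, reached_accuracy)
                  if ok and fes <= max_fes]
    if not successful:
        return 0.0, float("inf")                                   # no successful run
    suc_rate = len(successful) / total_runs                        # Eq (17)
    mean_fes = sum(successful) / len(successful)
    suc_perf = mean_fes * total_runs / len(successful)             # Eq (18)
    return suc_rate, suc_perf

# Example: 4 runs, each capped at 10,000 FEs, of which 3 reached the accuracy level.
rate, perf = success_metrics([3200, 4100, 10000, 2800], 10000, [True, True, False, True])
```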
However, from the results in Tables 3, 6 and 7, one may conclude that, compared with the other chaotic PSO algorithms, chaotic PID controlling PSO searches for the global optima with a high success probability and is comparably more effective and reliable for solving most of the complex problems in Table 2. Computational cost of chaotic PID controlling PSO. Table 8 presents the computational cost of the algorithms. From the results in Table 8, CPSO-3 consumes the least time on the test functions f_3, f_4, f_5 and f_10, whilst CPSO-2 consumes the least on the test functions f_1, f_2, f_8 and f_9, and CPSO-1 the least on the test functions f_6 and f_7. The computational cost of chaotic PID controlling PSO is higher than that of the other chaotic PSO algorithms on most of the test functions. This illustrates that chaotic PID controlling PSO needs to learn from CPSO-3 and CPSO-2 and further refine its complex computational process so as to improve its efficiency. Experimental discussion In [6], CPSO-1 is considered to outperform other meta-heuristics such as PRS, MS, SA, TS, CSA and GA when solving complex optimization problems. Furthermore, in [11], CPSO-3 has better search ability than catfish PSO, SPSO and CenterPSO when searching for the global optima. Therefore, chaotic PID controlling PSO shows better search efficiency and quality compared with these algorithms, and the experimental results support this. The reason why chaotic PID controlling PSO yields better results for solving complex optimization problems is that the time-varying PID controller, chaotic random parameters and chaotic local search for the global best position effectively improve the evolutionary dynamics of the particles and enhance the particles' local and global exploration and exploitation abilities. However, these hybrid evolutionary strategies have to be further updated for high-dimensional complex multimodal problems, since the diversity of the swarm deteriorates rapidly. Application in parameter estimation of a nonlinear dynamic system In this part, we conduct a detailed application to identifying the parameters of a nonlinear dynamic system. The application includes the description of the nonlinear dynamic system and experimental setup, the parameter estimation and experimental results, as well as model validation. Description of the nonlinear dynamic system and experimental setup In order to clarify the effectiveness and efficiency of chaotic PID controlling PSO, we apply chaotic PID controlling PSO, CPSO-3, GA and PSO to identifying the parameters of a nonlinear dynamic system. In the nonlinear dynamic model, the identified parameters are K, T_1, T_2 and T_3; for comparison with the experimental results, their real values are K = 2, T_1 = 1, T_2 = 20 and T_3 = 0.8. During the course of the parameter estimation, the identification criterion function is defined over the prediction errors, where N is the number of testing samples, y_i is the output value of the ith testing sample, and ŷ_i is the estimated prediction value of the ith testing sample.
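Since the explicit criterion equation is not reproduced here, the sketch below assumes a sum-of-squared-errors criterion over the N testing samples, which is a common choice for such identification problems and is stated as an assumption.

```python
import numpy as np

def identification_criterion(y_measured, y_predicted):
    """Identification criterion over N testing samples (assumed sum of squared errors)."""
    y_measured = np.asarray(y_measured, dtype=float)
    y_predicted = np.asarray(y_predicted, dtype=float)
    return float(np.sum((y_measured - y_predicted) ** 2))

# Usage: compare measured outputs with outputs predicted from a candidate (K, T1, T2, T3).
print(identification_criterion([1.0, 1.9, 2.4], [1.1, 1.8, 2.5]))
```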
These testing samples are acquired when a pseudo-random binary sequence is used as the input signal. The pseudo-random binary sequence and the testing samples are shown in Fig 5 below. For CPSO-3, GA, PSO and chaotic PID controlling PSO, the population size PN is set at 80, and the maximum generation number MaxT is set at 50. The settings of the important parameters for GA and PSO are summarized in Table 9. Parameter estimation and experimental results We test CPSO-3, GA, PSO and chaotic PID controlling PSO on the above criterion fitness function with the 4-D parameters K, T_1, T_2 and T_3 so as to estimate these parameters. To ensure the validity and accuracy of the experimental measurements, all evolutionary optimization algorithms are run 10 times on the fitness function, and their final results are summarized by the mean best fitness. The mean values, standard deviations of the results, and the best values are presented in Table 10 below. In order to determine whether the results obtained by chaotic PID controlling PSO are statistically different from the results generated by the other evolutionary optimization algorithms, nonparametric Wilcoxon rank-sum tests are conducted between the result of chaotic PID controlling PSO and the result achieved by each other evolutionary optimization algorithm for the fitness function. Table 10 presents the means and variances of the 10 runs of the four evolutionary optimization algorithms on the above criterion fitness function with dimension 4. The best results among the evolutionary optimization algorithms are shown in bold in Table 10. Table 11 presents the average computational cost time (in seconds) of the diverse algorithms for the 4-D identification problem. From the results in Table 10, we clearly notice that CPSO-3, PSO and chaotic PID controlling PSO outperform GA in identifying the parameters K, T_1, T_2 and T_3. In addition, chaotic PID controlling PSO performs best for all the parameter estimation, whilst CPSO-3 achieves better estimated results than PSO. From the graphs in Fig 6, one may observe that the mean fitness of GA is clearly the worst of all, which reveals GA's inferiority to the other three evolutionary algorithms for the whole parameter identification. On the other hand, it is worth noting that, compared to PSO, chaotic PID controlling PSO improves the results considerably in spite of a higher computational time consumption. Model validation To verify the estimation results of the four evolutionary algorithms, their estimated parameters were used in the dynamic computation of the above nonlinear system. The concrete output results of the verification experiments are presented in Fig 7 and Table 12. Fig 7 presents the output results and their errors for the diverse evolutionary optimization algorithms on the 4-D identification problem, and Table 12 presents the absolute accumulated errors of the diverse evolutionary optimization algorithms for the 4-D identification problem. As seen in Fig 7, there exist different absolute errors among these estimated output results. One may find that the absolute errors of chaotic PID controlling PSO and CPSO-3 are smaller, while those of GA are the largest. In addition, PSO produces more accurate estimation results than GA.
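The statistical comparison described above can be carried out, for example, with SciPy's rank-sum test; the run values below are illustrative, not those of Table 10.

```python
from scipy.stats import ranksums

# Best fitness values from 10 independent runs per algorithm (illustrative numbers only).
cpidso_runs = [0.012, 0.015, 0.011, 0.014, 0.013, 0.010, 0.016, 0.012, 0.011, 0.013]
pso_runs    = [0.031, 0.040, 0.028, 0.035, 0.044, 0.030, 0.038, 0.033, 0.029, 0.041]

# Nonparametric Wilcoxon rank-sum test between CPIDSO and a competitor.
statistic, p_value = ranksums(cpidso_runs, pso_runs)
print(f"statistic = {statistic:.3f}, p-value = {p_value:.4f}")
# A small p-value (e.g. below 0.05) indicates that the two result samples differ significantly.
```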
As given in Table 12, it is obvious that chaotic PID controlling PSO yields the best estimation results while GA performs comparatively worse. The estimation results of PSO are worse than those of CPSO-3, but better than those of GA. Despite this, all the evolutionary algorithms can be utilized to estimate the parameters of nonlinear dynamic systems. Conclusions and future work We present a chaotic PID controlling PSO variant, in which we attempt to use the combination of a PID controller, chaotic logistic dynamics and a hierarchical inertia weight to improve the performance of SPSO. Chaotic PID controlling PSO, together with several other chaotic PSO algorithms, is run on some multimodal functions. Subsequently, it is also used in the parameter identification of a given nonlinear dynamic system. The experimental results indicate that chaotic PID controlling PSO enhances the diversity of the swarm and has better convergence efficiency compared with several other chaotic PSO algorithms and meta-heuristics. Furthermore, chaotic PID controlling PSO also outperforms chaotic catfish PSO, GA and PSO for the parameter identification of the nonlinear dynamic system. Future work will further improve the performance of the hybrid evolutionary strategies of PID controllers, hierarchical inertia weight and chaotic dynamics for high-dimensional complex multimodal problems. Besides, the complex computational process needs to be refined. Moreover, we will apply the proposed approach to other practical engineering applications.
8,113
2017-05-04T00:00:00.000
[ "Computer Science" ]
Solutions to the Schrödinger Equation with Inversely Quadratic Yukawa Plus Inversely Quadratic Hellmann Potential Using Nikiforov-Uvarov Method The solutions to the Schrödinger equation with the inversely quadratic Yukawa plus inversely quadratic Hellmann (IQYIQH) potential for any angular momentum quantum number l are presented using the Nikiforov-Uvarov method. The bound state energy eigenvalues and the corresponding unnormalized eigenfunctions are obtained in terms of the Laguerre polynomials. The NU method is related to solutions in terms of generalized Jacobi polynomials. In the NU method, the Schrödinger equation is reduced to a generalized equation of hypergeometric type using the coordinate transformation s = s(r). The equation then yields a form whose polynomial solutions are given by the well-known Rodrigues relation. With the introduction of the IQYIQH potential into the Schrödinger equation, the resultant equation is further transformed in such a way that certain polynomials with four different possible forms are obtained. Out of these forms, only one is suitable for obtaining the energy eigenvalues and the corresponding eigenfunctions of the Schrödinger equation. Introduction The bound state solutions to the Schrödinger equation (SE) are only possible for some potentials of physical interest [1][2][3][4][5]. Quite recently, several authors have tried to obtain exact or approximate solutions to the Schrödinger equation for a number of special potentials [6][7][8][9][10]. Some of these potentials are known to play very important roles in many fields of physics such as molecular physics, solid state physics, and chemical physics [8]. The purpose of the present work is to present the solution to the Schrödinger equation with the inversely quadratic Yukawa potential [11] plus the inversely quadratic Hellmann potential [12], whose forms are given in Eq. (1). The sum of these potentials can be written as in Eq. (2), in terms of the internuclear distance r, the strengths of the Coulomb and Yukawa potentials, the screening parameter, and the dissociation energy. Equation (2) is then amenable to the Nikiforov-Uvarov method. Ita [13] has solved the Schrödinger equation for the Hellmann potential and obtained the energy eigenvalues and their corresponding wave functions using the expansion method and the Nikiforov-Uvarov method. Also, Hamzavi and Rajabi [14] have used the parametric Nikiforov-Uvarov method to obtain tensor coupling and relativistic spin and pseudospin symmetries of the Dirac equation with the Hellmann potential. Kocak et al. [15] solved the Schrödinger equation with the Hellmann potential using the asymptotic iteration method and obtained the energy eigenvalues and wave functions. However, not much has been achieved in the literature on solving the radial Schrödinger equation for any angular momentum quantum number l with the IQYIQH potential using the Nikiforov-Uvarov method. Overview of the Nikiforov-Uvarov Method The Nikiforov-Uvarov (NU) method is based on the solutions to a generalized second-order linear differential equation with special orthogonal functions [16]. The Schrödinger equation, written in the form of Eq. (3), can be solved by this method.
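Since the explicit equations are not reproduced in this text, the following block restates the standard relations of the NU method in the usual notation (a hypergeometric-type equation in σ, τ̃, σ̃ and the derived quantities π, τ, λ, ρ); this is the textbook form of the method and is assumed, rather than verified, to match the paper's Eqs (4)-(8).

```latex
% Standard Nikiforov-Uvarov relations (general textbook form; notation assumed).
\begin{align}
  &\psi''(s) + \frac{\tilde{\tau}(s)}{\sigma(s)}\,\psi'(s)
    + \frac{\tilde{\sigma}(s)}{\sigma^{2}(s)}\,\psi(s) = 0
    && \text{hypergeometric-type equation} \\
  &\psi(s) = \phi(s)\,y_n(s), \qquad
    \frac{\phi'(s)}{\phi(s)} = \frac{\pi(s)}{\sigma(s)}
    && \text{factorization of the wave function} \\
  &\pi(s) = \frac{\sigma'(s)-\tilde{\tau}(s)}{2}
    \pm \sqrt{\left(\frac{\sigma'(s)-\tilde{\tau}(s)}{2}\right)^{2}
    - \tilde{\sigma}(s) + k\,\sigma(s)}, \qquad
    \tau(s) = \tilde{\tau}(s) + 2\pi(s),\ \ \tau'(s) < 0 \\
  &\lambda = k + \pi'(s), \qquad
    \lambda_n = -n\,\tau'(s) - \frac{n(n-1)}{2}\,\sigma''(s)
    && \text{energy quantization via } \lambda = \lambda_n \\
  &y_n(s) = \frac{B_n}{\rho(s)}\,\frac{d^{n}}{ds^{n}}
    \left[\sigma^{n}(s)\,\rho(s)\right], \qquad
    \bigl(\sigma(s)\rho(s)\bigr)' = \tau(s)\,\rho(s)
    && \text{Rodrigues relation and weight function}
\end{align}
```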
The solution proceeds by transforming (3) into an equation of hypergeometric type with an appropriate coordinate transformation s = s(r) to obtain (4). To find the exact solution to (4), we write the wave function as the product in (5); substitution of (5) into (4) yields the hypergeometric-type equation (6). In (5), the first factor φ(s) is defined through its logarithmic derivative [17], with π(s) being at most a first-order polynomial. Also, the hypergeometric-type functions y_n(s) in (6) for a fixed integer n are given by the Rodrigues relation (7), where B_n is the normalization constant and the weight function ρ(s) must satisfy the condition (σρ)' = τρ, with τ(s) = τ̃(s) + 2π(s). In order to satisfy the condition imposed on the weight function ρ(s), it is necessary that the polynomial τ(s) be equal to zero at some point of the interval (a, b) and that its derivative in this interval, where ρ(s) > 0, be negative [17]. The function π(s) and the parameter λ required for the NU method are then defined as in [17], with λ = k + π'(s). The Schrödinger Equation In spherical coordinates, the Schrödinger equation with the potential V(r) is given as in [19]. Using the common ansatz for the wave function in (9), we get the set of equations (17)-(19), where l(l + 1) and m² are the separation constants. Y(θ, φ) = Θ(θ)Φ(φ) is the solution to (18) and (19), and these solutions are well known as the spherical harmonic functions [19]. Solutions to the Radial Equation Equation (17) is the radial part of the Schrödinger equation, which we are interested in solving. Equation (17), together with the potential in (2) and a suitable change of variable, yields the radial equation in the new variable, where R(s) denotes the radial wave function. This equation is then compared with (4) and the expressions in (22) are obtained. We then obtain the function π(s) by substituting (22) into (12). According to the NU method, the quadratic form under the square-root sign of (23) must be handled by setting the discriminant of this quadratic equation equal to zero, that is, Δ = b² − 4ac = 0. This discriminant gives a new equation which can be solved for the constant k to obtain the two roots k_± in (24) and (25). When the two values of k given in (25) are substituted into (23), the four possible forms of π(s) are obtained. Only one of the four forms of the polynomial π(s) is proper for obtaining the bound state solution, since τ(s) appearing in (4) must have a negative derivative. Therefore, the most suitable expression of π(s) is chosen for the root k_−, for which τ'(s) is indeed negative. From (19) and (20) we obtain the two corresponding expressions for λ; comparing them, λ = λ_n, we obtain the energy of the IQYIQH potential as given in (29). Let us now calculate the radial wave function R(s). Using (7) and (9), the corresponding expressions for φ(s) and ρ(s) are obtained, and then from (8) one has y_n(s), up to a normalization constant. The wave function R(s) can finally be obtained in terms of the generalized Laguerre polynomials, up to a normalization constant. Discussion In summary, we have obtained the energy eigenvalues and the corresponding unnormalized wave functions using the NU method for the Schrödinger equation with the inversely quadratic Yukawa plus inversely quadratic Hellmann potential. If we set two of the potential parameters to zero and choose the remaining strength appropriately, it is easy to show that (29) reduces to the well-known bound state energy spectrum of a particle in the Coulomb potential, where n = n_r + l + 1 is the principal quantum number.
Similarly, if we set the other two potential parameters to zero, (29) yields the bound state energy spectrum of a vibrating-rotating diatomic molecule subject to the inversely quadratic Yukawa potential, given in Eq. (34). Equation (34) is also similar to the one obtained in [12]. These limiting cases show the accuracy of our calculations. Conclusion The bound state solutions to the Schrödinger equation have been obtained for the inversely quadratic Yukawa plus inversely quadratic Hellmann potential. Special cases of the potential are also considered and their energy eigenvalues are obtained.
1,730.4
2013-12-09T00:00:00.000
[ "Physics" ]
Iris Identification Based on the Fusion of Multiple Methods Iris recognition occupies an important rank among the biometric approaches as a result of its accuracy and efficiency. The aim of this paper is to suggest a developed system for iris identification based on the fusion of the scale invariant feature transform (SIFT) and the local binary pattern (LBP) for feature extraction. Several steps have been applied. Firstly, any image type was converted to grayscale. Secondly, localization of the iris was achieved using the circular Hough transform. Thirdly, normalization mapped the circular iris region from Cartesian to polar (rectangular) form using Daugman's rubber sheet model, followed by histogram equalization to enhance the iris region. Finally, the features were extracted by utilizing the scale invariant feature transform and the local binary pattern. Particular sigma and threshold values were used for feature extraction, which achieved the highest rate of recognition. The programming was implemented using MATLAB 2013. The matching was performed by applying the city block distance. The iris recognition system was built with the use of iris images for 30 individuals in the CASIA v4.0 database. Every individual has 20 captures for the left and right eyes, with a total of 600 pictures. The main findings showed that the recognition rates of the proposed system are 98.67% for left eyes and 96.66% for right eyes, among thirty subjects. INTRODUCTION Iris recognition has gained more importance as a result of several applications, with high consistency and impeccable recognition rates. Iris recognition is used in high-security environments, including boundary control at airports and harbors as well as laboratory contact control. Several papers have been published, each with its strengths and weaknesses. Rashad et al. (2011) suggested an iris recognition and classification algorithm using a framework built on local binary pattern and histogram properties. The algorithm was applied as a statistical method for feature extraction, where a combined learning vector quantization classifier was used as a neural network classification tool. The identification rate for various iris datasets was 99.87% as compared to different methods [1]. Gongping et al. (2012) proposed a new approach to recognize the iris based on the scale invariant feature transform. The experimental results showed that the scale invariant feature transform algorithm, with enhancement and normalization, can notably improve recognition accuracy [2]. Harinder and Sunil (2016) suggested an approach for iris feature extraction using the scale invariant feature transform, which is invariant in scale, somewhat invariant in rotation, and shows robustness to affine distortion. The advantages of the proposed method are its accuracy and simplicity [3]. Rathgeb et al. (2018) proposed a method for iris recognition based on the scale invariant feature transform, representing a general-purpose image descriptor, discriminative orientation-based feature selection, and a magnitude probability distribution function. The weight assignment for the iris texture sub-regions showed an improved performance, with equal error rates of 0.88% and 0.9% for CASIAv3 and MMU, respectively [4]. Divya and Urmila (2016) used Daugman's method to determine the pupil and the iris borders. This work focuses more on an effective and accurate method for iris segmentation [5]. Humayan et al.
(2019) proposed a technique that combines discrete wavelet transformation and principal component analysis to extract optimized features for the iris. Their experimental evaluation validates the successful implementation of the proposed method [6]. In this paper, we suggest a new approach built on SIFT as well as LBP, where SIFT produces a matrix of coefficients whose size varies with the size of the enrolled images. The proposed method brings a considerable acceleration, with operation times comparable to those of the conventional scheme. In addition, the proposed method supports an effective biometric fusion, achieving important performance increases in a challenging multi-algorithm blend compared with the conventional scheme. Finally, SIFT proves particularly valuable as a result of its invariance to illumination, scale, noise, rotation, etc. MATERIALS AND METHODS Several approaches have been proposed to handle iris recognition; some of them deal with normalization problems, while others deeply treat the recognition operation, as described below. Iris Localization and Separation Localization of the iris is an important stage in the human iris recognition scheme, with the aim of creating an exact allocation of the iris borders. For the purpose of accuracy, this step governs all the subsequent phases. The human iris is located in the annular region between the pupil (inner circle) and the sclera (outer circle), with both the outer and inner borders of the iris representing circles [7]. Finding the centers of the pupil and iris is an important stage in iris recognition. Normalization of Iris This process denotes the preparation of an iris image for the extraction of features. Illumination has a direct effect on the size of the pupil and induces a non-linear pattern of iris variations. To offset these variations, a proper normalization technique is required. The most widely employed model is Daugman's rubber sheet, which transforms the circular region of the iris into a rectangular block of a fixed size, using equation (1). This model transforms all pixels in the circular iris into a corresponding position on the polar axes (r, θ), where (r) is the radial distance and (θ) is the angle of rotation at the equivalent radius [8], as shown in Figure-1. Here I(x, y) corresponds to the iris region, and (x, y) and (r, θ) are the Cartesian and polar coordinates, respectively, where (θ) has a range of 0 to 2π and (r) has a range of Rp to Ri. The pupil boundary points are defined as the linear combination of x(r, θ) and y(r, θ), while (xp, yp) are the pupil coordinates of the iris. Feature Extraction Extraction of the features is the key task in every verification method. Choosing an effective feature extractor is the most critical factor in achieving high authentication rates in the iris identification system. Each iris picture has a special feature that is different from the others. Consequently, it is possible to solve a number of problems in pattern recognition by choosing a better feature space. The system is implemented using two different feature extractors, which are proposed to create the feature vector [9]. The feature extractors utilized in this paper are SIFT and LBP. Scale Invariant Feature Transforms SIFT was suggested by David G. Lowe of the University of British Columbia in 2004, using distinctive image features at key points.
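Before turning to SIFT, a minimal sketch of the rubber-sheet normalization described above is given below; the sampling resolutions, nearest-neighbour interpolation, and function names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def rubber_sheet_normalize(img, xp, yp, rp, xi, yi, ri, radial_res=64, angular_res=256):
    """Daugman rubber-sheet normalization (sketch): map the annular iris region to a
    fixed-size rectangular block indexed by (r, theta).

    (xp, yp, rp) and (xi, yi, ri) are the pupil and iris circles found by the
    circular Hough transform.
    """
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    rs = np.linspace(0, 1, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=img.dtype)
    for j, th in enumerate(thetas):
        # Boundary points on the pupil and iris circles along direction theta.
        x_inner, y_inner = xp + rp * np.cos(th), yp + rp * np.sin(th)
        x_outer, y_outer = xi + ri * np.cos(th), yi + ri * np.sin(th)
        for i, r in enumerate(rs):
            # Linear combination of the two boundary points (rubber-sheet model).
            x = (1 - r) * x_inner + r * x_outer
            y = (1 - r) * y_inner + r * y_outer
            out[i, j] = img[int(round(y)), int(round(x))]
    return out
```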
The SIFT method extracts key points and computes their descriptors. In the past ten years, SIFT has been widely used in many applications, for example object detection, object classification, stereo correspondence, and motion tracking. The feature extraction is invariant to image scale and rotation, and has been shown to provide robust matching across an extensive range of fine distortion, changes in 3D viewpoint, noise, and changes in illumination [10,11]. The SIFT algorithm with its four stages is shown below. Detection of Scale Space Extrema The main phase of the SIFT algorithm examines all scales and image locations. The scale space of an image is the convolution of a variable-scale Gaussian kernel G(x, y, σ) with the input image I(x, y), where (σ) is the scale-space factor and (x, y) are the spatial coordinates. The scale space at different scales is shown in Figure-2 [12]. The implementation uses the difference of Gaussians (DoG) to find potential interest points that are invariant to scale and orientation. The DoG is calculated as in Eq.4 and Eq.5 [13]. Algorithm of Scale-Space Extrema Detection (building a pyramid and DoG) Initialization: sigma // the amount of blur in the image. Octaves // number of octaves required for the DoG. Levels // number of levels in each octave. Row, Column // new size for the iris image. Input: img() // the digital image of size (M*N) pixels. Step 1: read the image and convert it to grayscale. Step 2: resize the image from (m, n) to (Row, Column). Step 3: build the pyramid by downsampling. D{i} // structure of three cells: the first cell contains an array of size (Row * Column) pixels, the second cell an array of size (1/2 Row * 1/2 Column) pixels, and the third cell an array of size (1/4 Row * 1/4 Column) pixels. The result is three images of different sizes. Accurate Localization of Extreme Points To locate the extreme points of scale space, every sampling point is compared with all of its neighboring points to determine whether it is larger or smaller than all of them in both its image domain and scale; that is, each pixel is compared with its 26 neighbors in 3x3 regions at the current and adjacent scales, as shown in Figure-3. Key points are then selected based on measures of their stability; key points of low contrast and unstable edge responses are eliminated. Eigenvalues of the Hessian matrix (H) are computed to eliminate the edge responses, where H is given as in Eq.7, h11 and h22 as in Eq.8, and h12 and h21 as in Eq.9 [14]. Orientation All key point positions are assigned one or more orientations, depending on the direction of the local image gradient. An orientation histogram of gradients is created for each key point, based on the gradient magnitude (m). The orientation histogram has 36 bins covering the 360-degree range of orientations and computes the orientation (θ). The highest peak in the histogram is detected, as well as any other local peaks within 80% of the highest peak, as shown in Figure-4. The values of (m) and (θ) are computed by Eq.10 and Eq.11, respectively [15]. Key point descriptor Once an orientation has been assigned, the feature descriptor is calculated from orientation histograms over a (4 x 4) grid of pixel neighborhoods, which are set as shown in Figure-5.
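Returning to the scale-space stage described above, the following sketch builds a Gaussian/DoG pyramid over octaves and levels; the blur-step factor and the downsampling scheme are common choices and are assumptions here, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(img, octaves=3, levels=3, sigma=np.sqrt(2)):
    """Build a Gaussian / difference-of-Gaussian (DoG) pyramid (sketch).

    img: 2-D grayscale array (e.g. the normalized iris region of interest).
    Each octave halves the image by downsampling; within an octave the blur
    level increases by a factor k per level.
    """
    k = 2 ** (1.0 / levels)
    pyramid = []
    current = img.astype(float)
    for _ in range(octaves):
        blurred = [gaussian_filter(current, sigma * (k ** i)) for i in range(levels + 1)]
        dog = [blurred[i + 1] - blurred[i] for i in range(levels)]   # DoG layers
        pyramid.append(dog)
        current = current[::2, ::2]                                  # downsample for the next octave
    return pyramid
```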
The orientation histograms are computed relative to the orientation of the key point. Each histogram covers eight bins, and each descriptor contains an array of such histograms around the key point. This generates a SIFT feature descriptor of (4 x 4 x 8 = 128) elements, and the descriptor vector is invariant to rotation, scaling, and lighting [16]. Local Binary Patterns The local binary pattern is used to extract the local binary pattern histogram (LBPH) feature for the description of texture. Ojala introduced the LBP operator for texture classification, proposing a two-level form of the technique (Texture Spectrum). This offers a robust way to describe pure local binary patterns in a texture. There are only (2^8 = 256) possible texture units in the two-level version. In the binary case, the original (3 x 3) area, as shown in Figure-6, is thresholded by the value of the center pixel (the center pixel acts as the threshold); i.e., its (3 x 3) region is transformed into an 8-bit binary code according to this thresholding condition. The Proposed System All the steps of the processing system are explained in the following diagram (Figure-7). The steps of the processing system are: 1-Preprocessing: The enrolled image was converted to gray level. 2-Localization and separation of the iris: This step was used to separate the iris from the eye image. The circular Hough transform was utilized to find circles in pictures; this method is employed because of its robustness in the presence of noise, occlusion, and varying illumination. The circular Hough transform function was employed to find the coordinates of the centers of the pupil and iris, the radius of the pupil (Rp), and the radius of the iris (Ri). The next step is to use the coordinates of the pupil (xp, yp) and the coordinates of the iris (xi, yi) to cut the iris region out of the entire eye. The radius (R) for each point in the eye image was determined using Eq.13 [7]. The value of (R) is compared with Rp and Ri; if it lies between Rp and Ri, the point is treated as an iris point, otherwise it is a non-iris point and is marked with a zero value. 3-Normalization: The polar coordinate (r, θ) is determined from the pixel coordinates in the original iris image, as shown in Eq.16, Eq.17, and Eq.18 [8]; Eq.16 gives the radial coordinate r from the pixel coordinates, Eq.17 gives x = xp + r*sin(θ) and y = yp - r*cos(θ), and Eq.18 gives the angle θ. Figure-9 shows a sample image after applying the circular-to-rectangular normalization procedure. Occlusion by eyelids and eyelashes increases the difficulty and impacts the efficiency of feature extraction (represented by the blue color), and it also causes errors in the matching process. Therefore, we propose selecting a region of interest (ROI) within the iris area while avoiding the regions that could obstruct feature extraction. This region must have equal coordinates (row, column) in order to build a DoG pyramid. In this work, several areas of the iris image were tried experimentally, as shown in Figure-11, and regions without eyelids and eyelashes were selected. This also decreased the time of feature extraction, since sub-regions of the iris (ROI) are used instead of the entire iris region. 4-Feature extraction: A descriptor vector was formed for each key point. The local feature of the iris image around each key point was extracted by utilizing SIFT. For each key point created, the descriptor size is 128 = (4 x 4 x 8) elements. After that, a coefficient array of features was obtained and converted to the local binary pattern, and the results were stored in the feature vector.
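A minimal sketch of the basic LBP operator used in this step follows; the neighbour ordering and bit packing are conventional choices, since the exact convention is not specified above.

```python
import numpy as np

def lbp_code(block):
    """Basic LBP operator on a 3x3 block: threshold the 8 neighbours by the center
    pixel and pack the results into an 8-bit code (0..255)."""
    center = block[1, 1]
    # Neighbours taken clockwise starting from the top-left corner.
    neighbours = [block[0, 0], block[0, 1], block[0, 2], block[1, 2],
                  block[2, 2], block[2, 1], block[2, 0], block[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(bit << i for i, bit in enumerate(bits))

def lbp_image(img):
    """Apply the LBP operator to every interior pixel of a grayscale image."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=np.uint8)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = lbp_code(img[y - 1:y + 2, x - 1:x + 2])
    return out
```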
Finally, a row vector (Iris Code) was created and the feature templates were saved. To reduce the effects of image contrast and changing lighting conditions, we used the local binary pattern, where each pixel's value varies according to its spatial relationship with the 8 pixels surrounding it [17], as illustrated in Figure-7. 5-The Matching: In the matching process, the city block distance was used to determine the absolute difference between two vectors, as in equation (19). The feature vector was obtained by the previously mentioned techniques in the proposed system [18]. The suggested method was implemented and tested on the CASIA (V4) Interval database. The suggested system was implemented in the MATLAB R2013a programming language. System performance The average time values for feature extraction in the iris regions (ROI) A, B, C, and D are illustrated in Table-1. The time for feature extraction in regions A, B, and C was greater than the time elapsed for feature extraction in region (D). Therefore, the system depends on region (D) to perform the feature extraction used in the matching process. The proposed system used 600 left- and right-eye images from the CASIA (V4) database for 30 individuals; each individual has 10 different captured images for the left eye and 10 for the right eye. A sub-region (ROI) of the iris region was used for feature extraction; this region must have equal coordinates (m, n). To build the DoG pyramid, feature extraction using SIFT requires the number of octaves, the number of levels, sigma, and the threshold to be specified. Values of octaves = 3 and levels = 3, with various values of sigma and threshold, were used to eliminate low-contrast features. The recognition rate for the proposed system was computed as in Eq.20 [8]. Tables 2 and 3 show an increase in the recognition rate for certain imposed sigma and threshold values; however, at some sigma and threshold values, no features were extracted or the recognition rate decreased. The results in Table 2 were obtained using (5) left-eye images for the enrolled stage and (5) left-eye images for the tested stage, with region (D) of the iris used for feature extraction. Table 3 shows the results of (5) right-eye images for the enrolled stage and (5) right-eye images for the tested stage. The highest recognition rate, as calculated by Eq.20, was 98.67% for the left eyes and 96.66% for the right eyes, using SIFT and LBP on region D, with sigma = √2 and threshold = 0.07, as illustrated in Figure-12. Conclusions • Resizing of iris images is not useful because it does not eliminate the occlusion caused by eyelids and eyelashes, as illustrated in Figure-10 using the ROI. • Taking a large area of the iris consumes more time and storage capacity, and occasionally causes problems with device resources (memory, CPU, etc.), as explained in Table-1, which shows the computing time of feature extraction using region D with a size of (100*100). • The fusion of the SIFT method followed by the LBP method produced the best results with minimal errors when the matching process of Eq.19 was performed between the tested and enrolled images.
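The matching and the recognition-rate computation can be sketched as follows; the percentage definition of the recognition rate is an assumption, since Eq.20 itself is not reproduced above, and the helper names are illustrative.

```python
import numpy as np

def city_block_distance(a, b):
    """City block (L1) distance between two feature vectors, as used for matching (Eq.19)."""
    return float(np.sum(np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))))

def identify(test_vector, enrolled_templates):
    """Return the enrolled subject whose template is closest to the test vector.

    enrolled_templates: dict mapping subject id -> enrolled feature vector (iris code).
    """
    return min(enrolled_templates,
               key=lambda s: city_block_distance(test_vector, enrolled_templates[s]))

def recognition_rate(test_set, enrolled_templates):
    """Percentage of test images assigned to the correct subject (assumed form of Eq.20).

    test_set: list of (true subject id, feature vector) pairs.
    """
    correct = sum(1 for subject, vec in test_set
                  if identify(vec, enrolled_templates) == subject)
    return 100.0 * correct / len(test_set)
```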
3,924.2
2021-04-30T00:00:00.000
[ "Computer Science" ]